00:00:00.000 Started by upstream project "autotest-spdk-v24.05-vs-dpdk-v22.11" build number 92 00:00:00.000 originally caused by: 00:00:00.000 Started by upstream project "nightly-trigger" build number 3270 00:00:00.000 originally caused by: 00:00:00.000 Started by timer 00:00:00.033 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.033 The recommended git tool is: git 00:00:00.034 using credential 00000000-0000-0000-0000-000000000002 00:00:00.035 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.052 Fetching changes from the remote Git repository 00:00:00.053 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.123 Using shallow fetch with depth 1 00:00:00.123 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.123 > git --version # timeout=10 00:00:00.158 > git --version # 'git version 2.39.2' 00:00:00.158 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.193 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.193 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.585 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.597 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.611 Checking out Revision 7caca6989ac753a10259529aadac5754060382af (FETCH_HEAD) 00:00:03.611 > git config core.sparsecheckout # timeout=10 00:00:03.624 > git read-tree -mu HEAD # timeout=10 00:00:03.642 > git checkout -f 7caca6989ac753a10259529aadac5754060382af # timeout=5 00:00:03.662 Commit message: "jenkins/jjb-config: Purge centos leftovers" 00:00:03.662 > git rev-list --no-walk 7caca6989ac753a10259529aadac5754060382af # timeout=10 00:00:03.753 [Pipeline] Start of Pipeline 00:00:03.787 [Pipeline] library 00:00:03.788 Loading library shm_lib@master 00:00:03.788 Library shm_lib@master is cached. Copying from home. 00:00:03.803 [Pipeline] node 00:00:03.813 Running on VM-host-SM16 in /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:00:03.815 [Pipeline] { 00:00:03.825 [Pipeline] catchError 00:00:03.827 [Pipeline] { 00:00:03.836 [Pipeline] wrap 00:00:03.844 [Pipeline] { 00:00:03.853 [Pipeline] stage 00:00:03.855 [Pipeline] { (Prologue) 00:00:03.878 [Pipeline] echo 00:00:03.881 Node: VM-host-SM16 00:00:03.889 [Pipeline] cleanWs 00:00:03.898 [WS-CLEANUP] Deleting project workspace... 00:00:03.898 [WS-CLEANUP] Deferred wipeout is used... 
00:00:03.904 [WS-CLEANUP] done 00:00:04.323 [Pipeline] setCustomBuildProperty 00:00:04.423 [Pipeline] httpRequest 00:00:04.437 [Pipeline] echo 00:00:04.438 Sorcerer 10.211.164.101 is alive 00:00:04.443 [Pipeline] httpRequest 00:00:04.446 HttpMethod: GET 00:00:04.447 URL: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:04.447 Sending request to url: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:04.448 Response Code: HTTP/1.1 200 OK 00:00:04.448 Success: Status code 200 is in the accepted range: 200,404 00:00:04.449 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:05.280 [Pipeline] sh 00:00:05.553 + tar --no-same-owner -xf jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:05.562 [Pipeline] httpRequest 00:00:05.610 [Pipeline] echo 00:00:05.612 Sorcerer 10.211.164.101 is alive 00:00:05.618 [Pipeline] httpRequest 00:00:05.622 HttpMethod: GET 00:00:05.622 URL: http://10.211.164.101/packages/spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz 00:00:05.623 Sending request to url: http://10.211.164.101/packages/spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz 00:00:05.624 Response Code: HTTP/1.1 200 OK 00:00:05.624 Success: Status code 200 is in the accepted range: 200,404 00:00:05.624 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz 00:00:24.415 [Pipeline] sh 00:00:24.691 + tar --no-same-owner -xf spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz 00:00:27.975 [Pipeline] sh 00:00:28.250 + git -C spdk log --oneline -n5 00:00:28.250 5fa2f5086 nvme: add lock_depth for ctrlr_lock 00:00:28.250 330a4f94d nvme: check pthread_mutex_destroy() return value 00:00:28.250 7b72c3ced nvme: add nvme_ctrlr_lock 00:00:28.250 fc7a37019 nvme: always use nvme_robust_mutex_lock for ctrlr_lock 00:00:28.250 3e04ecdd1 bdev_nvme: use spdk_nvme_ctrlr_fail() on ctrlr_loss_timeout 00:00:28.273 [Pipeline] withCredentials 00:00:28.282 > git --version # timeout=10 00:00:28.294 > git --version # 'git version 2.39.2' 00:00:28.308 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:00:28.310 [Pipeline] { 00:00:28.322 [Pipeline] retry 00:00:28.325 [Pipeline] { 00:00:28.346 [Pipeline] sh 00:00:28.621 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:00:28.890 [Pipeline] } 00:00:28.918 [Pipeline] // retry 00:00:28.923 [Pipeline] } 00:00:28.938 [Pipeline] // withCredentials 00:00:28.947 [Pipeline] httpRequest 00:00:28.970 [Pipeline] echo 00:00:28.971 Sorcerer 10.211.164.101 is alive 00:00:28.980 [Pipeline] httpRequest 00:00:28.983 HttpMethod: GET 00:00:28.984 URL: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:00:28.984 Sending request to url: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:00:28.991 Response Code: HTTP/1.1 200 OK 00:00:28.991 Success: Status code 200 is in the accepted range: 200,404 00:00:28.992 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:03.282 [Pipeline] sh 00:01:03.575 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:05.505 [Pipeline] sh 00:01:05.777 + git -C dpdk log --oneline -n5 00:01:05.777 caf0f5d395 version: 22.11.4 00:01:05.777 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:05.777 dc9c799c7d vhost: fix missing spinlock 
unlock 00:01:05.777 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:05.777 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:05.792 [Pipeline] writeFile 00:01:05.805 [Pipeline] sh 00:01:06.077 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:06.088 [Pipeline] sh 00:01:06.366 + cat autorun-spdk.conf 00:01:06.366 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:06.366 SPDK_TEST_NVMF=1 00:01:06.366 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:06.366 SPDK_TEST_USDT=1 00:01:06.366 SPDK_RUN_UBSAN=1 00:01:06.366 SPDK_TEST_NVMF_MDNS=1 00:01:06.366 NET_TYPE=virt 00:01:06.366 SPDK_JSONRPC_GO_CLIENT=1 00:01:06.366 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:06.366 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:06.366 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:06.372 RUN_NIGHTLY=1 00:01:06.375 [Pipeline] } 00:01:06.392 [Pipeline] // stage 00:01:06.407 [Pipeline] stage 00:01:06.409 [Pipeline] { (Run VM) 00:01:06.422 [Pipeline] sh 00:01:06.698 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:06.698 + echo 'Start stage prepare_nvme.sh' 00:01:06.698 Start stage prepare_nvme.sh 00:01:06.698 + [[ -n 7 ]] 00:01:06.698 + disk_prefix=ex7 00:01:06.698 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest ]] 00:01:06.698 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf ]] 00:01:06.698 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf 00:01:06.698 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:06.698 ++ SPDK_TEST_NVMF=1 00:01:06.698 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:06.698 ++ SPDK_TEST_USDT=1 00:01:06.698 ++ SPDK_RUN_UBSAN=1 00:01:06.698 ++ SPDK_TEST_NVMF_MDNS=1 00:01:06.698 ++ NET_TYPE=virt 00:01:06.698 ++ SPDK_JSONRPC_GO_CLIENT=1 00:01:06.698 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:06.698 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:06.698 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:06.698 ++ RUN_NIGHTLY=1 00:01:06.698 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:01:06.698 + nvme_files=() 00:01:06.698 + declare -A nvme_files 00:01:06.698 + backend_dir=/var/lib/libvirt/images/backends 00:01:06.698 + nvme_files['nvme.img']=5G 00:01:06.698 + nvme_files['nvme-cmb.img']=5G 00:01:06.698 + nvme_files['nvme-multi0.img']=4G 00:01:06.698 + nvme_files['nvme-multi1.img']=4G 00:01:06.698 + nvme_files['nvme-multi2.img']=4G 00:01:06.698 + nvme_files['nvme-openstack.img']=8G 00:01:06.698 + nvme_files['nvme-zns.img']=5G 00:01:06.698 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:06.698 + (( SPDK_TEST_FTL == 1 )) 00:01:06.698 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:06.698 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:06.698 + for nvme in "${!nvme_files[@]}" 00:01:06.698 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi2.img -s 4G 00:01:06.698 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:06.698 + for nvme in "${!nvme_files[@]}" 00:01:06.698 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-cmb.img -s 5G 00:01:07.262 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:07.262 + for nvme in "${!nvme_files[@]}" 00:01:07.262 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-openstack.img -s 8G 00:01:07.262 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:07.262 + for nvme in "${!nvme_files[@]}" 00:01:07.262 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-zns.img -s 5G 00:01:07.262 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:07.262 + for nvme in "${!nvme_files[@]}" 00:01:07.262 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi1.img -s 4G 00:01:07.520 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:07.520 + for nvme in "${!nvme_files[@]}" 00:01:07.520 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi0.img -s 4G 00:01:07.520 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:07.520 + for nvme in "${!nvme_files[@]}" 00:01:07.520 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme.img -s 5G 00:01:08.084 Formatting '/var/lib/libvirt/images/backends/ex7-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:08.084 ++ sudo grep -rl ex7-nvme.img /etc/libvirt/qemu 00:01:08.084 + echo 'End stage prepare_nvme.sh' 00:01:08.084 End stage prepare_nvme.sh 00:01:08.096 [Pipeline] sh 00:01:08.411 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:08.411 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex7-nvme.img -b /var/lib/libvirt/images/backends/ex7-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img -H -a -v -f fedora38 00:01:08.411 00:01:08.411 DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant 00:01:08.411 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk 00:01:08.411 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest 00:01:08.411 HELP=0 00:01:08.411 DRY_RUN=0 00:01:08.411 NVME_FILE=/var/lib/libvirt/images/backends/ex7-nvme.img,/var/lib/libvirt/images/backends/ex7-nvme-multi0.img, 00:01:08.411 NVME_DISKS_TYPE=nvme,nvme, 00:01:08.411 NVME_AUTO_CREATE=0 00:01:08.411 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img, 00:01:08.411 NVME_CMB=,, 00:01:08.411 NVME_PMR=,, 00:01:08.411 NVME_ZNS=,, 00:01:08.411 NVME_MS=,, 00:01:08.411 NVME_FDP=,, 00:01:08.411 
SPDK_VAGRANT_DISTRO=fedora38 00:01:08.411 SPDK_VAGRANT_VMCPU=10 00:01:08.411 SPDK_VAGRANT_VMRAM=12288 00:01:08.411 SPDK_VAGRANT_PROVIDER=libvirt 00:01:08.411 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:08.411 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:08.411 SPDK_OPENSTACK_NETWORK=0 00:01:08.411 VAGRANT_PACKAGE_BOX=0 00:01:08.411 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:08.411 FORCE_DISTRO=true 00:01:08.411 VAGRANT_BOX_VERSION= 00:01:08.411 EXTRA_VAGRANTFILES= 00:01:08.411 NIC_MODEL=e1000 00:01:08.411 00:01:08.411 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt' 00:01:08.411 /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:01:11.687 Bringing machine 'default' up with 'libvirt' provider... 00:01:12.253 ==> default: Creating image (snapshot of base box volume). 00:01:12.510 ==> default: Creating domain with the following settings... 00:01:12.510 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721048408_5c9826a936283a553b23 00:01:12.510 ==> default: -- Domain type: kvm 00:01:12.510 ==> default: -- Cpus: 10 00:01:12.510 ==> default: -- Feature: acpi 00:01:12.510 ==> default: -- Feature: apic 00:01:12.510 ==> default: -- Feature: pae 00:01:12.510 ==> default: -- Memory: 12288M 00:01:12.510 ==> default: -- Memory Backing: hugepages: 00:01:12.510 ==> default: -- Management MAC: 00:01:12.510 ==> default: -- Loader: 00:01:12.510 ==> default: -- Nvram: 00:01:12.510 ==> default: -- Base box: spdk/fedora38 00:01:12.510 ==> default: -- Storage pool: default 00:01:12.510 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721048408_5c9826a936283a553b23.img (20G) 00:01:12.510 ==> default: -- Volume Cache: default 00:01:12.510 ==> default: -- Kernel: 00:01:12.510 ==> default: -- Initrd: 00:01:12.510 ==> default: -- Graphics Type: vnc 00:01:12.510 ==> default: -- Graphics Port: -1 00:01:12.510 ==> default: -- Graphics IP: 127.0.0.1 00:01:12.510 ==> default: -- Graphics Password: Not defined 00:01:12.510 ==> default: -- Video Type: cirrus 00:01:12.510 ==> default: -- Video VRAM: 9216 00:01:12.510 ==> default: -- Sound Type: 00:01:12.510 ==> default: -- Keymap: en-us 00:01:12.510 ==> default: -- TPM Path: 00:01:12.510 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:12.510 ==> default: -- Command line args: 00:01:12.510 ==> default: -> value=-device, 00:01:12.510 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:12.510 ==> default: -> value=-drive, 00:01:12.510 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme.img,if=none,id=nvme-0-drive0, 00:01:12.510 ==> default: -> value=-device, 00:01:12.510 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:12.510 ==> default: -> value=-device, 00:01:12.510 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:12.510 ==> default: -> value=-drive, 00:01:12.510 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:12.510 ==> default: -> value=-device, 00:01:12.510 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:12.510 ==> default: -> value=-drive, 00:01:12.510 ==> default: -> 
value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:12.510 ==> default: -> value=-device, 00:01:12.510 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:12.510 ==> default: -> value=-drive, 00:01:12.510 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:12.510 ==> default: -> value=-device, 00:01:12.510 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:12.510 ==> default: Creating shared folders metadata... 00:01:12.510 ==> default: Starting domain. 00:01:14.407 ==> default: Waiting for domain to get an IP address... 00:01:29.284 ==> default: Waiting for SSH to become available... 00:01:30.662 ==> default: Configuring and enabling network interfaces... 00:01:35.922 default: SSH address: 192.168.121.63:22 00:01:35.922 default: SSH username: vagrant 00:01:35.922 default: SSH auth method: private key 00:01:37.818 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:44.494 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:01:51.108 ==> default: Mounting SSHFS shared folder... 00:01:51.672 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:01:51.672 ==> default: Checking Mount.. 00:01:53.044 ==> default: Folder Successfully Mounted! 00:01:53.044 ==> default: Running provisioner: file... 00:01:53.667 default: ~/.gitconfig => .gitconfig 00:01:54.232 00:01:54.232 SUCCESS! 00:01:54.232 00:01:54.232 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:01:54.232 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:54.232 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 00:01:54.232 00:01:54.241 [Pipeline] } 00:01:54.258 [Pipeline] // stage 00:01:54.269 [Pipeline] dir 00:01:54.270 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt 00:01:54.272 [Pipeline] { 00:01:54.286 [Pipeline] catchError 00:01:54.287 [Pipeline] { 00:01:54.299 [Pipeline] sh 00:01:54.575 + vagrant ssh-config --host vagrant 00:01:54.575 + sed -ne /^Host/,$p 00:01:54.575 + tee ssh_conf 00:01:58.816 Host vagrant 00:01:58.816 HostName 192.168.121.63 00:01:58.816 User vagrant 00:01:58.816 Port 22 00:01:58.816 UserKnownHostsFile /dev/null 00:01:58.816 StrictHostKeyChecking no 00:01:58.816 PasswordAuthentication no 00:01:58.816 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:01:58.816 IdentitiesOnly yes 00:01:58.816 LogLevel FATAL 00:01:58.816 ForwardAgent yes 00:01:58.816 ForwardX11 yes 00:01:58.816 00:01:58.830 [Pipeline] withEnv 00:01:58.833 [Pipeline] { 00:01:58.851 [Pipeline] sh 00:01:59.131 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:59.131 source /etc/os-release 00:01:59.131 [[ -e /image.version ]] && img=$(< /image.version) 00:01:59.131 # Minimal, systemd-like check. 
00:01:59.131 if [[ -e /.dockerenv ]]; then 00:01:59.131 # Clear garbage from the node's name: 00:01:59.131 # agt-er_autotest_547-896 -> autotest_547-896 00:01:59.131 # $HOSTNAME is the actual container id 00:01:59.131 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:59.131 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:59.131 # We can assume this is a mount from a host where container is running, 00:01:59.131 # so fetch its hostname to easily identify the target swarm worker. 00:01:59.131 container="$(< /etc/hostname) ($agent)" 00:01:59.131 else 00:01:59.131 # Fallback 00:01:59.131 container=$agent 00:01:59.131 fi 00:01:59.131 fi 00:01:59.131 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:59.131 00:01:59.400 [Pipeline] } 00:01:59.421 [Pipeline] // withEnv 00:01:59.430 [Pipeline] setCustomBuildProperty 00:01:59.448 [Pipeline] stage 00:01:59.452 [Pipeline] { (Tests) 00:01:59.473 [Pipeline] sh 00:01:59.749 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:59.769 [Pipeline] sh 00:02:00.048 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:00.323 [Pipeline] timeout 00:02:00.324 Timeout set to expire in 40 min 00:02:00.326 [Pipeline] { 00:02:00.345 [Pipeline] sh 00:02:00.625 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:01.191 HEAD is now at 5fa2f5086 nvme: add lock_depth for ctrlr_lock 00:02:01.205 [Pipeline] sh 00:02:01.481 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:01.753 [Pipeline] sh 00:02:02.031 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:02.051 [Pipeline] sh 00:02:02.329 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-vg-autotest ./autoruner.sh spdk_repo 00:02:02.329 ++ readlink -f spdk_repo 00:02:02.329 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:02.329 + [[ -n /home/vagrant/spdk_repo ]] 00:02:02.329 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:02.329 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:02.329 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:02.329 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:02.329 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:02.329 + [[ nvmf-tcp-vg-autotest == pkgdep-* ]] 00:02:02.329 + cd /home/vagrant/spdk_repo 00:02:02.329 + source /etc/os-release 00:02:02.329 ++ NAME='Fedora Linux' 00:02:02.329 ++ VERSION='38 (Cloud Edition)' 00:02:02.329 ++ ID=fedora 00:02:02.329 ++ VERSION_ID=38 00:02:02.329 ++ VERSION_CODENAME= 00:02:02.329 ++ PLATFORM_ID=platform:f38 00:02:02.329 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:02:02.329 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:02.329 ++ LOGO=fedora-logo-icon 00:02:02.329 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:02:02.329 ++ HOME_URL=https://fedoraproject.org/ 00:02:02.329 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:02:02.329 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:02.329 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:02.329 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:02.329 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:02:02.329 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:02.329 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:02:02.329 ++ SUPPORT_END=2024-05-14 00:02:02.329 ++ VARIANT='Cloud Edition' 00:02:02.330 ++ VARIANT_ID=cloud 00:02:02.330 + uname -a 00:02:02.587 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:02:02.588 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:02.846 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:02.846 Hugepages 00:02:02.846 node hugesize free / total 00:02:02.846 node0 1048576kB 0 / 0 00:02:02.846 node0 2048kB 0 / 0 00:02:02.846 00:02:02.846 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:02.846 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:03.104 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:03.104 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:03.104 + rm -f /tmp/spdk-ld-path 00:02:03.104 + source autorun-spdk.conf 00:02:03.104 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:03.104 ++ SPDK_TEST_NVMF=1 00:02:03.104 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:03.104 ++ SPDK_TEST_USDT=1 00:02:03.104 ++ SPDK_RUN_UBSAN=1 00:02:03.104 ++ SPDK_TEST_NVMF_MDNS=1 00:02:03.104 ++ NET_TYPE=virt 00:02:03.104 ++ SPDK_JSONRPC_GO_CLIENT=1 00:02:03.104 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:03.104 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:03.104 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:03.104 ++ RUN_NIGHTLY=1 00:02:03.104 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:03.104 + [[ -n '' ]] 00:02:03.104 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:03.104 + for M in /var/spdk/build-*-manifest.txt 00:02:03.104 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:03.104 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:03.104 + for M in /var/spdk/build-*-manifest.txt 00:02:03.104 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:03.104 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:03.104 ++ uname 00:02:03.104 + [[ Linux == \L\i\n\u\x ]] 00:02:03.104 + sudo dmesg -T 00:02:03.104 + sudo dmesg --clear 00:02:03.104 + dmesg_pid=6011 00:02:03.104 + sudo dmesg -Tw 00:02:03.104 + [[ Fedora Linux == FreeBSD ]] 00:02:03.104 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:03.104 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:03.104 + [[ 
-e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:03.104 + [[ -x /usr/src/fio-static/fio ]] 00:02:03.104 + export FIO_BIN=/usr/src/fio-static/fio 00:02:03.104 + FIO_BIN=/usr/src/fio-static/fio 00:02:03.104 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:03.104 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:03.104 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:03.104 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:03.104 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:03.104 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:03.104 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:03.104 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:03.104 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:03.104 Test configuration: 00:02:03.104 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:03.104 SPDK_TEST_NVMF=1 00:02:03.104 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:03.104 SPDK_TEST_USDT=1 00:02:03.104 SPDK_RUN_UBSAN=1 00:02:03.104 SPDK_TEST_NVMF_MDNS=1 00:02:03.104 NET_TYPE=virt 00:02:03.104 SPDK_JSONRPC_GO_CLIENT=1 00:02:03.104 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:03.104 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:03.104 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:03.104 RUN_NIGHTLY=1 13:00:59 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:03.104 13:00:59 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:03.104 13:00:59 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:03.104 13:00:59 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:03.104 13:00:59 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:03.104 13:00:59 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:03.104 13:00:59 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:03.105 13:00:59 -- paths/export.sh@5 -- $ export PATH 00:02:03.105 13:00:59 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:03.105 13:00:59 -- common/autobuild_common.sh@436 -- $ out=/home/vagrant/spdk_repo/spdk/../output 
00:02:03.105 13:00:59 -- common/autobuild_common.sh@437 -- $ date +%s 00:02:03.105 13:00:59 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1721048459.XXXXXX 00:02:03.363 13:00:59 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1721048459.b5ARbl 00:02:03.363 13:00:59 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:02:03.363 13:00:59 -- common/autobuild_common.sh@443 -- $ '[' -n v22.11.4 ']' 00:02:03.363 13:00:59 -- common/autobuild_common.sh@444 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:03.363 13:00:59 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:03.363 13:00:59 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:03.363 13:00:59 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:03.363 13:00:59 -- common/autobuild_common.sh@453 -- $ get_config_params 00:02:03.363 13:00:59 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:02:03.363 13:00:59 -- common/autotest_common.sh@10 -- $ set +x 00:02:03.363 13:00:59 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang' 00:02:03.363 13:00:59 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:02:03.363 13:00:59 -- pm/common@17 -- $ local monitor 00:02:03.363 13:00:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:03.363 13:00:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:03.363 13:00:59 -- pm/common@25 -- $ sleep 1 00:02:03.363 13:00:59 -- pm/common@21 -- $ date +%s 00:02:03.363 13:00:59 -- pm/common@21 -- $ date +%s 00:02:03.363 13:00:59 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721048459 00:02:03.363 13:00:59 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721048459 00:02:03.363 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721048459_collect-vmstat.pm.log 00:02:03.363 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721048459_collect-cpu-load.pm.log 00:02:04.298 13:01:00 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:02:04.298 13:01:00 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:04.298 13:01:00 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:04.298 13:01:00 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:04.298 13:01:00 -- spdk/autobuild.sh@16 -- $ date -u 00:02:04.298 Mon Jul 15 01:01:00 PM UTC 2024 00:02:04.298 13:01:00 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:04.298 v24.05-13-g5fa2f5086 00:02:04.298 13:01:00 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:04.298 13:01:00 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:04.298 13:01:00 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:04.298 13:01:00 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:02:04.299 13:01:00 -- 
common/autotest_common.sh@1103 -- $ xtrace_disable 00:02:04.299 13:01:00 -- common/autotest_common.sh@10 -- $ set +x 00:02:04.299 ************************************ 00:02:04.299 START TEST ubsan 00:02:04.299 ************************************ 00:02:04.299 using ubsan 00:02:04.299 13:01:00 ubsan -- common/autotest_common.sh@1121 -- $ echo 'using ubsan' 00:02:04.299 00:02:04.299 real 0m0.000s 00:02:04.299 user 0m0.000s 00:02:04.299 sys 0m0.000s 00:02:04.299 13:01:00 ubsan -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:02:04.299 13:01:00 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:04.299 ************************************ 00:02:04.299 END TEST ubsan 00:02:04.299 ************************************ 00:02:04.299 13:01:00 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:02:04.299 13:01:00 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:04.299 13:01:00 -- common/autobuild_common.sh@429 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:04.299 13:01:00 -- common/autotest_common.sh@1097 -- $ '[' 2 -le 1 ']' 00:02:04.299 13:01:00 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:02:04.299 13:01:00 -- common/autotest_common.sh@10 -- $ set +x 00:02:04.299 ************************************ 00:02:04.299 START TEST build_native_dpdk 00:02:04.299 ************************************ 00:02:04.299 13:01:00 build_native_dpdk -- common/autotest_common.sh@1121 -- $ _build_native_dpdk 00:02:04.299 13:01:00 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:04.299 13:01:00 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:04.299 13:01:00 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:04.299 13:01:00 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:02:04.299 13:01:00 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:04.299 13:01:00 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:04.299 13:01:00 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:04.299 13:01:00 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:04.299 13:01:00 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:04.299 13:01:00 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:04.299 13:01:00 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:04.299 13:01:00 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:04.299 13:01:00 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:04.299 13:01:00 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:04.299 13:01:00 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:04.299 13:01:00 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:04.299 13:01:00 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:04.299 13:01:00 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /home/vagrant/spdk_repo/dpdk ]] 00:02:04.299 13:01:00 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:04.299 13:01:00 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:04.299 caf0f5d395 version: 22.11.4 00:02:04.299 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:02:04.299 dc9c799c7d vhost: fix missing spinlock unlock 00:02:04.299 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:02:04.299 6ef77f2a5e net/gve: fix RX buffer size alignment 00:02:04.299 13:01:00 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:04.299 13:01:00 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:04.299 13:01:00 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:02:04.299 13:01:00 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:04.299 13:01:00 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:04.299 13:01:00 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:04.299 13:01:00 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:04.299 13:01:00 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:04.299 13:01:00 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:04.299 13:01:00 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:04.299 13:01:00 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:04.299 13:01:00 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:04.299 13:01:00 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:04.299 13:01:00 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:04.299 13:01:00 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:04.299 13:01:00 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:02:04.299 13:01:00 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:04.299 13:01:00 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:02:04.299 13:01:00 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:02:04.299 13:01:00 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:02:04.299 13:01:00 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:02:04.299 13:01:00 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:02:04.299 13:01:00 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:02:04.299 13:01:00 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:02:04.299 13:01:00 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:02:04.299 13:01:00 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:02:04.299 13:01:00 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:02:04.299 13:01:00 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:02:04.299 13:01:00 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:02:04.299 13:01:00 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:02:04.299 13:01:00 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:02:04.299 13:01:00 
build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:02:04.299 13:01:00 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:04.299 13:01:00 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 22 00:02:04.299 13:01:00 build_native_dpdk -- scripts/common.sh@350 -- $ local d=22 00:02:04.299 13:01:00 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:04.299 13:01:00 build_native_dpdk -- scripts/common.sh@352 -- $ echo 22 00:02:04.299 13:01:00 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=22 00:02:04.299 13:01:00 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:02:04.299 13:01:01 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:02:04.299 13:01:01 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:04.299 13:01:01 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:02:04.299 13:01:01 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:02:04.299 13:01:01 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:02:04.299 13:01:01 build_native_dpdk -- scripts/common.sh@364 -- $ return 1 00:02:04.299 13:01:01 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:04.299 patching file config/rte_config.h 00:02:04.299 Hunk #1 succeeded at 60 (offset 1 line). 00:02:04.299 13:01:01 build_native_dpdk -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false 00:02:04.299 13:01:01 build_native_dpdk -- common/autobuild_common.sh@178 -- $ uname -s 00:02:04.299 13:01:01 build_native_dpdk -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']' 00:02:04.299 13:01:01 build_native_dpdk -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:04.299 13:01:01 build_native_dpdk -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:09.560 The Meson build system 00:02:09.560 Version: 1.3.1 00:02:09.560 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:09.560 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:09.560 Build type: native build 00:02:09.560 Program cat found: YES (/usr/bin/cat) 00:02:09.560 Project name: DPDK 00:02:09.560 Project version: 22.11.4 00:02:09.560 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:09.560 C linker for the host machine: gcc ld.bfd 2.39-16 00:02:09.560 Host machine cpu family: x86_64 00:02:09.560 Host machine cpu: x86_64 00:02:09.560 Message: ## Building in Developer Mode ## 00:02:09.560 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:09.561 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:09.561 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:09.561 Program objdump found: YES (/usr/bin/objdump) 00:02:09.561 Program python3 found: YES (/usr/bin/python3) 00:02:09.561 Program cat found: YES (/usr/bin/cat) 00:02:09.561 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:02:09.561 Checking for size of "void *" : 8 00:02:09.561 Checking for size of "void *" : 8 (cached) 00:02:09.561 Library m found: YES 00:02:09.561 Library numa found: YES 00:02:09.561 Has header "numaif.h" : YES 00:02:09.561 Library fdt found: NO 00:02:09.561 Library execinfo found: NO 00:02:09.561 Has header "execinfo.h" : YES 00:02:09.561 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:09.561 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:09.561 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:09.561 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:09.561 Run-time dependency openssl found: YES 3.0.9 00:02:09.561 Run-time dependency libpcap found: YES 1.10.4 00:02:09.561 Has header "pcap.h" with dependency libpcap: YES 00:02:09.561 Compiler for C supports arguments -Wcast-qual: YES 00:02:09.561 Compiler for C supports arguments -Wdeprecated: YES 00:02:09.561 Compiler for C supports arguments -Wformat: YES 00:02:09.561 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:09.561 Compiler for C supports arguments -Wformat-security: NO 00:02:09.561 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:09.561 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:09.561 Compiler for C supports arguments -Wnested-externs: YES 00:02:09.561 Compiler for C supports arguments -Wold-style-definition: YES 00:02:09.561 Compiler for C supports arguments -Wpointer-arith: YES 00:02:09.561 Compiler for C supports arguments -Wsign-compare: YES 00:02:09.561 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:09.561 Compiler for C supports arguments -Wundef: YES 00:02:09.561 Compiler for C supports arguments -Wwrite-strings: YES 00:02:09.561 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:09.561 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:09.561 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:09.561 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:09.561 Compiler for C supports arguments -mavx512f: YES 00:02:09.561 Checking if "AVX512 checking" compiles: YES 00:02:09.561 Fetching value of define "__SSE4_2__" : 1 00:02:09.561 Fetching value of define "__AES__" : 1 00:02:09.561 Fetching value of define "__AVX__" : 1 00:02:09.561 Fetching value of define "__AVX2__" : 1 00:02:09.561 Fetching value of define "__AVX512BW__" : (undefined) 00:02:09.561 Fetching value of define "__AVX512CD__" : (undefined) 00:02:09.561 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:09.561 Fetching value of define "__AVX512F__" : (undefined) 00:02:09.561 Fetching value of define "__AVX512VL__" : (undefined) 00:02:09.561 Fetching value of define "__PCLMUL__" : 1 00:02:09.561 Fetching value of define "__RDRND__" : 1 00:02:09.561 Fetching value of define "__RDSEED__" : 1 00:02:09.561 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:09.561 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:09.561 Message: lib/kvargs: Defining dependency "kvargs" 00:02:09.561 Message: lib/telemetry: Defining dependency "telemetry" 00:02:09.561 Checking for function "getentropy" : YES 00:02:09.561 Message: lib/eal: Defining dependency "eal" 00:02:09.561 Message: lib/ring: Defining dependency "ring" 00:02:09.561 Message: lib/rcu: Defining dependency "rcu" 00:02:09.561 Message: lib/mempool: Defining dependency "mempool" 00:02:09.561 Message: lib/mbuf: Defining dependency "mbuf" 00:02:09.561 Fetching value of define 
"__PCLMUL__" : 1 (cached) 00:02:09.561 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:09.561 Compiler for C supports arguments -mpclmul: YES 00:02:09.561 Compiler for C supports arguments -maes: YES 00:02:09.561 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:09.561 Compiler for C supports arguments -mavx512bw: YES 00:02:09.561 Compiler for C supports arguments -mavx512dq: YES 00:02:09.561 Compiler for C supports arguments -mavx512vl: YES 00:02:09.561 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:09.561 Compiler for C supports arguments -mavx2: YES 00:02:09.561 Compiler for C supports arguments -mavx: YES 00:02:09.561 Message: lib/net: Defining dependency "net" 00:02:09.561 Message: lib/meter: Defining dependency "meter" 00:02:09.561 Message: lib/ethdev: Defining dependency "ethdev" 00:02:09.561 Message: lib/pci: Defining dependency "pci" 00:02:09.561 Message: lib/cmdline: Defining dependency "cmdline" 00:02:09.561 Message: lib/metrics: Defining dependency "metrics" 00:02:09.561 Message: lib/hash: Defining dependency "hash" 00:02:09.561 Message: lib/timer: Defining dependency "timer" 00:02:09.561 Fetching value of define "__AVX2__" : 1 (cached) 00:02:09.561 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:09.561 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:02:09.561 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:02:09.561 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:02:09.561 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:02:09.561 Message: lib/acl: Defining dependency "acl" 00:02:09.561 Message: lib/bbdev: Defining dependency "bbdev" 00:02:09.561 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:09.561 Run-time dependency libelf found: YES 0.190 00:02:09.561 Message: lib/bpf: Defining dependency "bpf" 00:02:09.561 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:09.561 Message: lib/compressdev: Defining dependency "compressdev" 00:02:09.561 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:09.561 Message: lib/distributor: Defining dependency "distributor" 00:02:09.561 Message: lib/efd: Defining dependency "efd" 00:02:09.561 Message: lib/eventdev: Defining dependency "eventdev" 00:02:09.561 Message: lib/gpudev: Defining dependency "gpudev" 00:02:09.561 Message: lib/gro: Defining dependency "gro" 00:02:09.561 Message: lib/gso: Defining dependency "gso" 00:02:09.561 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:09.561 Message: lib/jobstats: Defining dependency "jobstats" 00:02:09.561 Message: lib/latencystats: Defining dependency "latencystats" 00:02:09.561 Message: lib/lpm: Defining dependency "lpm" 00:02:09.561 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:09.561 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:09.561 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:09.561 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:02:09.561 Message: lib/member: Defining dependency "member" 00:02:09.561 Message: lib/pcapng: Defining dependency "pcapng" 00:02:09.561 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:09.561 Message: lib/power: Defining dependency "power" 00:02:09.561 Message: lib/rawdev: Defining dependency "rawdev" 00:02:09.561 Message: lib/regexdev: Defining dependency "regexdev" 00:02:09.561 Message: lib/dmadev: Defining dependency "dmadev" 00:02:09.561 Message: lib/rib: Defining 
dependency "rib" 00:02:09.561 Message: lib/reorder: Defining dependency "reorder" 00:02:09.561 Message: lib/sched: Defining dependency "sched" 00:02:09.561 Message: lib/security: Defining dependency "security" 00:02:09.561 Message: lib/stack: Defining dependency "stack" 00:02:09.561 Has header "linux/userfaultfd.h" : YES 00:02:09.561 Message: lib/vhost: Defining dependency "vhost" 00:02:09.561 Message: lib/ipsec: Defining dependency "ipsec" 00:02:09.561 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:09.561 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:09.561 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:02:09.561 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:09.561 Message: lib/fib: Defining dependency "fib" 00:02:09.561 Message: lib/port: Defining dependency "port" 00:02:09.561 Message: lib/pdump: Defining dependency "pdump" 00:02:09.561 Message: lib/table: Defining dependency "table" 00:02:09.561 Message: lib/pipeline: Defining dependency "pipeline" 00:02:09.561 Message: lib/graph: Defining dependency "graph" 00:02:09.561 Message: lib/node: Defining dependency "node" 00:02:09.561 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:09.561 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:09.561 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:09.561 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:09.561 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:09.561 Compiler for C supports arguments -Wno-unused-value: YES 00:02:09.561 Compiler for C supports arguments -Wno-format: YES 00:02:09.561 Compiler for C supports arguments -Wno-format-security: YES 00:02:09.561 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:10.937 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:10.938 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:10.938 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:10.938 Fetching value of define "__AVX2__" : 1 (cached) 00:02:10.938 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:10.938 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:10.938 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:10.938 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:10.938 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:10.938 Program doxygen found: YES (/usr/bin/doxygen) 00:02:10.938 Configuring doxy-api.conf using configuration 00:02:10.938 Program sphinx-build found: NO 00:02:10.938 Configuring rte_build_config.h using configuration 00:02:10.938 Message: 00:02:10.938 ================= 00:02:10.938 Applications Enabled 00:02:10.938 ================= 00:02:10.938 00:02:10.938 apps: 00:02:10.938 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:02:10.938 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:02:10.938 test-security-perf, 00:02:10.938 00:02:10.938 Message: 00:02:10.938 ================= 00:02:10.938 Libraries Enabled 00:02:10.938 ================= 00:02:10.938 00:02:10.938 libs: 00:02:10.938 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:02:10.938 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:02:10.938 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:02:10.938 eventdev, gpudev, gro, gso, ip_frag, 
jobstats, latencystats, lpm,
00:02:10.938 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder,
00:02:10.938 sched, security, stack, vhost, ipsec, fib, port, pdump,
00:02:10.938 table, pipeline, graph, node,
00:02:10.938 
00:02:10.938 Message: 
00:02:10.938 ===============
00:02:10.938 Drivers Enabled
00:02:10.938 ===============
00:02:10.938 
00:02:10.938 common:
00:02:10.938 
00:02:10.938 bus:
00:02:10.938 pci, vdev,
00:02:10.938 mempool:
00:02:10.938 ring,
00:02:10.938 dma:
00:02:10.938 
00:02:10.938 net:
00:02:10.938 i40e,
00:02:10.938 raw:
00:02:10.938 
00:02:10.938 crypto:
00:02:10.938 
00:02:10.938 compress:
00:02:10.938 
00:02:10.938 regex:
00:02:10.938 
00:02:10.938 vdpa:
00:02:10.938 
00:02:10.938 event:
00:02:10.938 
00:02:10.938 baseband:
00:02:10.938 
00:02:10.938 gpu:
00:02:10.938 
00:02:10.938 
00:02:10.938 Message: 
00:02:10.938 =================
00:02:10.938 Content Skipped
00:02:10.938 =================
00:02:10.938 
00:02:10.938 apps:
00:02:10.938 
00:02:10.938 libs:
00:02:10.938 kni: explicitly disabled via build config (deprecated lib)
00:02:10.938 flow_classify: explicitly disabled via build config (deprecated lib)
00:02:10.938 
00:02:10.938 drivers:
00:02:10.938 common/cpt: not in enabled drivers build config
00:02:10.938 common/dpaax: not in enabled drivers build config
00:02:10.938 common/iavf: not in enabled drivers build config
00:02:10.938 common/idpf: not in enabled drivers build config
00:02:10.938 common/mvep: not in enabled drivers build config
00:02:10.938 common/octeontx: not in enabled drivers build config
00:02:10.938 bus/auxiliary: not in enabled drivers build config
00:02:10.938 bus/dpaa: not in enabled drivers build config
00:02:10.938 bus/fslmc: not in enabled drivers build config
00:02:10.938 bus/ifpga: not in enabled drivers build config
00:02:10.938 bus/vmbus: not in enabled drivers build config
00:02:10.938 common/cnxk: not in enabled drivers build config
00:02:10.938 common/mlx5: not in enabled drivers build config
00:02:10.938 common/qat: not in enabled drivers build config
00:02:10.938 common/sfc_efx: not in enabled drivers build config
00:02:10.938 mempool/bucket: not in enabled drivers build config
00:02:10.938 mempool/cnxk: not in enabled drivers build config
00:02:10.938 mempool/dpaa: not in enabled drivers build config
00:02:10.938 mempool/dpaa2: not in enabled drivers build config
00:02:10.938 mempool/octeontx: not in enabled drivers build config
00:02:10.938 mempool/stack: not in enabled drivers build config
00:02:10.938 dma/cnxk: not in enabled drivers build config
00:02:10.938 dma/dpaa: not in enabled drivers build config
00:02:10.938 dma/dpaa2: not in enabled drivers build config
00:02:10.938 dma/hisilicon: not in enabled drivers build config
00:02:10.938 dma/idxd: not in enabled drivers build config
00:02:10.938 dma/ioat: not in enabled drivers build config
00:02:10.938 dma/skeleton: not in enabled drivers build config
00:02:10.938 net/af_packet: not in enabled drivers build config
00:02:10.938 net/af_xdp: not in enabled drivers build config
00:02:10.938 net/ark: not in enabled drivers build config
00:02:10.938 net/atlantic: not in enabled drivers build config
00:02:10.938 net/avp: not in enabled drivers build config
00:02:10.938 net/axgbe: not in enabled drivers build config
00:02:10.938 net/bnx2x: not in enabled drivers build config
00:02:10.938 net/bnxt: not in enabled drivers build config
00:02:10.938 net/bonding: not in enabled drivers build config
00:02:10.938 net/cnxk: not in enabled drivers build config
00:02:10.938 net/cxgbe: not in enabled drivers build config
00:02:10.938 net/dpaa: not in enabled drivers build config
00:02:10.938 net/dpaa2: not in enabled drivers build config
00:02:10.938 net/e1000: not in enabled drivers build config
00:02:10.938 net/ena: not in enabled drivers build config
00:02:10.938 net/enetc: not in enabled drivers build config
00:02:10.938 net/enetfec: not in enabled drivers build config
00:02:10.938 net/enic: not in enabled drivers build config
00:02:10.938 net/failsafe: not in enabled drivers build config
00:02:10.938 net/fm10k: not in enabled drivers build config
00:02:10.938 net/gve: not in enabled drivers build config
00:02:10.938 net/hinic: not in enabled drivers build config
00:02:10.938 net/hns3: not in enabled drivers build config
00:02:10.938 net/iavf: not in enabled drivers build config
00:02:10.938 net/ice: not in enabled drivers build config
00:02:10.938 net/idpf: not in enabled drivers build config
00:02:10.938 net/igc: not in enabled drivers build config
00:02:10.938 net/ionic: not in enabled drivers build config
00:02:10.938 net/ipn3ke: not in enabled drivers build config
00:02:10.938 net/ixgbe: not in enabled drivers build config
00:02:10.938 net/kni: not in enabled drivers build config
00:02:10.938 net/liquidio: not in enabled drivers build config
00:02:10.938 net/mana: not in enabled drivers build config
00:02:10.938 net/memif: not in enabled drivers build config
00:02:10.938 net/mlx4: not in enabled drivers build config
00:02:10.938 net/mlx5: not in enabled drivers build config
00:02:10.938 net/mvneta: not in enabled drivers build config
00:02:10.938 net/mvpp2: not in enabled drivers build config
00:02:10.938 net/netvsc: not in enabled drivers build config
00:02:10.938 net/nfb: not in enabled drivers build config
00:02:10.938 net/nfp: not in enabled drivers build config
00:02:10.938 net/ngbe: not in enabled drivers build config
00:02:10.938 net/null: not in enabled drivers build config
00:02:10.938 net/octeontx: not in enabled drivers build config
00:02:10.938 net/octeon_ep: not in enabled drivers build config
00:02:10.938 net/pcap: not in enabled drivers build config
00:02:10.938 net/pfe: not in enabled drivers build config
00:02:10.939 net/qede: not in enabled drivers build config
00:02:10.939 net/ring: not in enabled drivers build config
00:02:10.939 net/sfc: not in enabled drivers build config
00:02:10.939 net/softnic: not in enabled drivers build config
00:02:10.939 net/tap: not in enabled drivers build config
00:02:10.939 net/thunderx: not in enabled drivers build config
00:02:10.939 net/txgbe: not in enabled drivers build config
00:02:10.939 net/vdev_netvsc: not in enabled drivers build config
00:02:10.939 net/vhost: not in enabled drivers build config
00:02:10.939 net/virtio: not in enabled drivers build config
00:02:10.939 net/vmxnet3: not in enabled drivers build config
00:02:10.939 raw/cnxk_bphy: not in enabled drivers build config
00:02:10.939 raw/cnxk_gpio: not in enabled drivers build config
00:02:10.939 raw/dpaa2_cmdif: not in enabled drivers build config
00:02:10.939 raw/ifpga: not in enabled drivers build config
00:02:10.939 raw/ntb: not in enabled drivers build config
00:02:10.939 raw/skeleton: not in enabled drivers build config
00:02:10.939 crypto/armv8: not in enabled drivers build config
00:02:10.939 crypto/bcmfs: not in enabled drivers build config
00:02:10.939 crypto/caam_jr: not in enabled drivers build config
00:02:10.939 crypto/ccp: not in enabled drivers build config
00:02:10.939 crypto/cnxk: not in enabled drivers build config
00:02:10.939 crypto/dpaa_sec: not in enabled drivers build config
00:02:10.939 crypto/dpaa2_sec: not in enabled drivers build config
00:02:10.939 crypto/ipsec_mb: not in enabled drivers build config
00:02:10.939 crypto/mlx5: not in enabled drivers build config
00:02:10.939 crypto/mvsam: not in enabled drivers build config
00:02:10.939 crypto/nitrox: not in enabled drivers build config
00:02:10.939 crypto/null: not in enabled drivers build config
00:02:10.939 crypto/octeontx: not in enabled drivers build config
00:02:10.939 crypto/openssl: not in enabled drivers build config
00:02:10.939 crypto/scheduler: not in enabled drivers build config
00:02:10.939 crypto/uadk: not in enabled drivers build config
00:02:10.939 crypto/virtio: not in enabled drivers build config
00:02:10.939 compress/isal: not in enabled drivers build config
00:02:10.939 compress/mlx5: not in enabled drivers build config
00:02:10.939 compress/octeontx: not in enabled drivers build config
00:02:10.939 compress/zlib: not in enabled drivers build config
00:02:10.939 regex/mlx5: not in enabled drivers build config
00:02:10.939 regex/cn9k: not in enabled drivers build config
00:02:10.939 vdpa/ifc: not in enabled drivers build config
00:02:10.939 vdpa/mlx5: not in enabled drivers build config
00:02:10.939 vdpa/sfc: not in enabled drivers build config
00:02:10.939 event/cnxk: not in enabled drivers build config
00:02:10.939 event/dlb2: not in enabled drivers build config
00:02:10.939 event/dpaa: not in enabled drivers build config
00:02:10.939 event/dpaa2: not in enabled drivers build config
00:02:10.939 event/dsw: not in enabled drivers build config
00:02:10.939 event/opdl: not in enabled drivers build config
00:02:10.939 event/skeleton: not in enabled drivers build config
00:02:10.939 event/sw: not in enabled drivers build config
00:02:10.939 event/octeontx: not in enabled drivers build config
00:02:10.939 baseband/acc: not in enabled drivers build config
00:02:10.939 baseband/fpga_5gnr_fec: not in enabled drivers build config
00:02:10.939 baseband/fpga_lte_fec: not in enabled drivers build config
00:02:10.939 baseband/la12xx: not in enabled drivers build config
00:02:10.939 baseband/null: not in enabled drivers build config
00:02:10.939 baseband/turbo_sw: not in enabled drivers build config
00:02:10.939 gpu/cuda: not in enabled drivers build config
00:02:10.939 
00:02:10.939 
00:02:10.939 Build targets in project: 314
00:02:10.939 
00:02:10.939 DPDK 22.11.4
00:02:10.939 
00:02:10.939 User defined options
00:02:10.939 libdir : lib
00:02:10.939 prefix : /home/vagrant/spdk_repo/dpdk/build
00:02:10.939 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow
00:02:10.939 c_link_args :
00:02:10.939 enable_docs : false
00:02:10.939 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:02:10.939 enable_kmods : false
00:02:10.939 machine : native
00:02:10.939 tests : false
00:02:10.939 
00:02:10.939 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:10.939 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated.
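[Editor's note] The configure step that produced the summary above is not shown in this excerpt; it is driven by SPDK's autobuild scripts, and the WARNING indicates it was invoked as `meson [options]` rather than the non-deprecated `meson setup [options]`. As a minimal sketch only, with the paths and option values taken from the "User defined options" summary (and assuming DPDK 22.11 sources at /home/vagrant/spdk_repo/dpdk), an equivalent configure-and-build sequence would look roughly like this, ending with the same ninja invocation recorded below:

    # Sketch only: reconstructs the configuration summarized above.
    cd /home/vagrant/spdk_repo/dpdk
    meson setup build-tmp \
        --prefix=/home/vagrant/spdk_repo/dpdk/build \
        --libdir=lib \
        -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Denable_docs=false \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base \
        -Denable_kmods=false \
        -Dmachine=native \
        -Dtests=false
    ninja -C build-tmp -j10   # matches the ninja command logged below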
00:02:10.939 13:01:07 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:10.939 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:10.939 [1/743] Generating lib/rte_kvargs_mingw with a custom command 00:02:10.939 [2/743] Generating lib/rte_telemetry_def with a custom command 00:02:10.939 [3/743] Generating lib/rte_kvargs_def with a custom command 00:02:10.939 [4/743] Generating lib/rte_telemetry_mingw with a custom command 00:02:10.939 [5/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:11.197 [6/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:11.197 [7/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:11.197 [8/743] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:11.197 [9/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:11.197 [10/743] Linking static target lib/librte_kvargs.a 00:02:11.197 [11/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:11.197 [12/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:11.197 [13/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:11.197 [14/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:11.197 [15/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:11.197 [16/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:11.197 [17/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:11.455 [18/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:11.455 [19/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:11.455 [20/743] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.455 [21/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:02:11.455 [22/743] Linking target lib/librte_kvargs.so.23.0 00:02:11.455 [23/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:11.455 [24/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:11.455 [25/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:11.455 [26/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:11.455 [27/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:11.713 [28/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:11.713 [29/743] Linking static target lib/librte_telemetry.a 00:02:11.713 [30/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:11.713 [31/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:11.713 [32/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:11.713 [33/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:11.713 [34/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:11.713 [35/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:11.713 [36/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:11.713 [37/743] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:02:11.713 [38/743] Compiling C 
object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:11.713 [39/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:11.713 [40/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:11.972 [41/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:11.972 [42/743] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.972 [43/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:11.972 [44/743] Linking target lib/librte_telemetry.so.23.0 00:02:11.972 [45/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:11.972 [46/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:12.231 [47/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:12.231 [48/743] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:02:12.231 [49/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:12.231 [50/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:12.231 [51/743] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:12.231 [52/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:12.231 [53/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:12.231 [54/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:12.231 [55/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:12.231 [56/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:12.231 [57/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:12.231 [58/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:12.231 [59/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:12.231 [60/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:12.231 [61/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:12.231 [62/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:12.489 [63/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:12.489 [64/743] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:12.489 [65/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:02:12.489 [66/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:12.489 [67/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:12.489 [68/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:12.489 [69/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:12.489 [70/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:12.489 [71/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:12.489 [72/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:12.489 [73/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:12.489 [74/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:12.747 [75/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:12.747 [76/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:12.747 [77/743] Generating lib/rte_eal_def with a custom command 00:02:12.747 [78/743] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:12.747 [79/743] Generating lib/rte_eal_mingw with a custom command 00:02:12.747 [80/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:12.747 [81/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:12.747 [82/743] Generating lib/rte_ring_def with a custom command 00:02:12.747 [83/743] Generating lib/rte_ring_mingw with a custom command 00:02:12.748 [84/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:12.748 [85/743] Generating lib/rte_rcu_def with a custom command 00:02:12.748 [86/743] Generating lib/rte_rcu_mingw with a custom command 00:02:12.748 [87/743] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:12.748 [88/743] Linking static target lib/librte_ring.a 00:02:12.748 [89/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:12.748 [90/743] Generating lib/rte_mempool_def with a custom command 00:02:13.005 [91/743] Generating lib/rte_mempool_mingw with a custom command 00:02:13.005 [92/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:13.005 [93/743] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.005 [94/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:13.264 [95/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:13.264 [96/743] Linking static target lib/librte_eal.a 00:02:13.264 [97/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:13.522 [98/743] Generating lib/rte_mbuf_def with a custom command 00:02:13.522 [99/743] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:13.522 [100/743] Generating lib/rte_mbuf_mingw with a custom command 00:02:13.522 [101/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:13.522 [102/743] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:13.522 [103/743] Linking static target lib/librte_rcu.a 00:02:13.522 [104/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:13.781 [105/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:13.781 [106/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:13.781 [107/743] Linking static target lib/librte_mempool.a 00:02:13.781 [108/743] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.781 [109/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:14.039 [110/743] Generating lib/rte_net_def with a custom command 00:02:14.039 [111/743] Generating lib/rte_net_mingw with a custom command 00:02:14.039 [112/743] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:14.039 [113/743] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:14.039 [114/743] Generating lib/rte_meter_def with a custom command 00:02:14.039 [115/743] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:14.039 [116/743] Generating lib/rte_meter_mingw with a custom command 00:02:14.039 [117/743] Linking static target lib/librte_meter.a 00:02:14.039 [118/743] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:14.300 [119/743] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:14.300 [120/743] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.300 [121/743] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:14.300 
[122/743] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:14.560 [123/743] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:14.560 [124/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:14.560 [125/743] Linking static target lib/librte_net.a 00:02:14.560 [126/743] Linking static target lib/librte_mbuf.a 00:02:14.560 [127/743] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.817 [128/743] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.817 [129/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:14.817 [130/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:15.074 [131/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:15.074 [132/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:15.074 [133/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:15.074 [134/743] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.332 [135/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:15.589 [136/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:15.589 [137/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:15.589 [138/743] Generating lib/rte_ethdev_def with a custom command 00:02:15.589 [139/743] Generating lib/rte_ethdev_mingw with a custom command 00:02:15.847 [140/743] Generating lib/rte_pci_def with a custom command 00:02:15.847 [141/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:15.847 [142/743] Generating lib/rte_pci_mingw with a custom command 00:02:15.847 [143/743] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:15.847 [144/743] Linking static target lib/librte_pci.a 00:02:15.847 [145/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:15.847 [146/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:15.847 [147/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:15.847 [148/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:15.847 [149/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:16.105 [150/743] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.105 [151/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:16.105 [152/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:16.105 [153/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:16.105 [154/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:16.105 [155/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:16.105 [156/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:16.105 [157/743] Generating lib/rte_cmdline_def with a custom command 00:02:16.105 [158/743] Generating lib/rte_cmdline_mingw with a custom command 00:02:16.105 [159/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:16.105 [160/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:16.105 [161/743] Generating lib/rte_metrics_def with a custom command 00:02:16.105 [162/743] Generating lib/rte_metrics_mingw with a custom command 00:02:16.363 [163/743] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:16.363 [164/743] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:16.363 [165/743] Generating lib/rte_hash_def with a custom command 00:02:16.363 [166/743] Generating lib/rte_hash_mingw with a custom command 00:02:16.363 [167/743] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:16.363 [168/743] Generating lib/rte_timer_def with a custom command 00:02:16.363 [169/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:16.363 [170/743] Generating lib/rte_timer_mingw with a custom command 00:02:16.363 [171/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:16.363 [172/743] Linking static target lib/librte_cmdline.a 00:02:16.620 [173/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:16.879 [174/743] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:16.879 [175/743] Linking static target lib/librte_metrics.a 00:02:16.879 [176/743] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:16.879 [177/743] Linking static target lib/librte_timer.a 00:02:17.143 [178/743] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.143 [179/743] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.418 [180/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:17.418 [181/743] Linking static target lib/librte_ethdev.a 00:02:17.418 [182/743] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:17.418 [183/743] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:17.418 [184/743] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.981 [185/743] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:17.981 [186/743] Generating lib/rte_acl_def with a custom command 00:02:17.981 [187/743] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:17.981 [188/743] Generating lib/rte_acl_mingw with a custom command 00:02:17.981 [189/743] Generating lib/rte_bbdev_def with a custom command 00:02:17.981 [190/743] Generating lib/rte_bbdev_mingw with a custom command 00:02:17.981 [191/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:18.238 [192/743] Generating lib/rte_bitratestats_def with a custom command 00:02:18.238 [193/743] Generating lib/rte_bitratestats_mingw with a custom command 00:02:18.495 [194/743] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:18.752 [195/743] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:18.752 [196/743] Linking static target lib/librte_bitratestats.a 00:02:18.752 [197/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:19.008 [198/743] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.008 [199/743] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:19.008 [200/743] Linking static target lib/librte_bbdev.a 00:02:19.266 [201/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:19.266 [202/743] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:19.266 [203/743] Linking static target lib/librte_hash.a 00:02:19.524 [204/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:19.524 [205/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:19.524 [206/743] Generating lib/bbdev.sym_chk with 
a custom command (wrapped by meson to capture output) 00:02:19.524 [207/743] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:02:19.524 [208/743] Linking static target lib/acl/libavx512_tmp.a 00:02:19.524 [209/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:20.090 [210/743] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.090 [211/743] Generating lib/rte_bpf_def with a custom command 00:02:20.090 [212/743] Generating lib/rte_bpf_mingw with a custom command 00:02:20.090 [213/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:20.090 [214/743] Generating lib/rte_cfgfile_def with a custom command 00:02:20.090 [215/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:20.347 [216/743] Generating lib/rte_cfgfile_mingw with a custom command 00:02:20.347 [217/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:20.347 [218/743] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:20.347 [219/743] Linking static target lib/librte_cfgfile.a 00:02:20.347 [220/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:02:20.347 [221/743] Linking static target lib/librte_acl.a 00:02:20.347 [222/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:20.605 [223/743] Generating lib/rte_compressdev_def with a custom command 00:02:20.605 [224/743] Generating lib/rte_compressdev_mingw with a custom command 00:02:20.605 [225/743] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.605 [226/743] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.862 [227/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:20.862 [228/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:20.862 [229/743] Generating lib/rte_cryptodev_def with a custom command 00:02:20.862 [230/743] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.862 [231/743] Generating lib/rte_cryptodev_mingw with a custom command 00:02:20.862 [232/743] Linking target lib/librte_eal.so.23.0 00:02:20.862 [233/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:20.862 [234/743] Linking static target lib/librte_bpf.a 00:02:21.120 [235/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:21.120 [236/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:21.120 [237/743] Linking static target lib/librte_compressdev.a 00:02:21.120 [238/743] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:02:21.120 [239/743] Linking target lib/librte_ring.so.23.0 00:02:21.120 [240/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:21.120 [241/743] Linking target lib/librte_meter.so.23.0 00:02:21.378 [242/743] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:02:21.378 [243/743] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.378 [244/743] Linking target lib/librte_rcu.so.23.0 00:02:21.378 [245/743] Linking target lib/librte_mempool.so.23.0 00:02:21.378 [246/743] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:02:21.378 [247/743] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:02:21.378 [248/743] Linking target lib/librte_pci.so.23.0 00:02:21.378 
[249/743] Linking target lib/librte_timer.so.23.0 00:02:21.378 [250/743] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:02:21.636 [251/743] Linking target lib/librte_mbuf.so.23.0 00:02:21.636 [252/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:21.636 [253/743] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:02:21.636 [254/743] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:02:21.636 [255/743] Linking target lib/librte_acl.so.23.0 00:02:21.636 [256/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:21.636 [257/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:21.636 [258/743] Generating lib/rte_distributor_def with a custom command 00:02:21.636 [259/743] Linking target lib/librte_cfgfile.so.23.0 00:02:21.636 [260/743] Generating lib/rte_distributor_mingw with a custom command 00:02:21.636 [261/743] Generating lib/rte_efd_mingw with a custom command 00:02:21.636 [262/743] Generating lib/rte_efd_def with a custom command 00:02:21.636 [263/743] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:02:21.636 [264/743] Linking target lib/librte_net.so.23.0 00:02:21.636 [265/743] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:02:21.894 [266/743] Linking target lib/librte_bbdev.so.23.0 00:02:21.895 [267/743] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:02:21.895 [268/743] Linking target lib/librte_cmdline.so.23.0 00:02:21.895 [269/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:21.895 [270/743] Linking static target lib/librte_distributor.a 00:02:21.895 [271/743] Linking target lib/librte_hash.so.23.0 00:02:21.895 [272/743] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.152 [273/743] Linking target lib/librte_compressdev.so.23.0 00:02:22.152 [274/743] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.152 [275/743] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:02:22.152 [276/743] Linking target lib/librte_ethdev.so.23.0 00:02:22.152 [277/743] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.152 [278/743] Linking target lib/librte_distributor.so.23.0 00:02:22.152 [279/743] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:22.410 [280/743] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:02:22.410 [281/743] Linking target lib/librte_metrics.so.23.0 00:02:22.410 [282/743] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:02:22.410 [283/743] Linking target lib/librte_bitratestats.so.23.0 00:02:22.410 [284/743] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:22.667 [285/743] Linking target lib/librte_bpf.so.23.0 00:02:22.667 [286/743] Generating lib/rte_eventdev_def with a custom command 00:02:22.667 [287/743] Generating lib/rte_eventdev_mingw with a custom command 00:02:22.667 [288/743] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:02:22.667 [289/743] Generating lib/rte_gpudev_def with a custom command 00:02:22.667 [290/743] Compiling C object 
lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:22.667 [291/743] Generating lib/rte_gpudev_mingw with a custom command 00:02:23.231 [292/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:23.231 [293/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:23.231 [294/743] Linking static target lib/librte_cryptodev.a 00:02:23.231 [295/743] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:23.231 [296/743] Linking static target lib/librte_efd.a 00:02:23.488 [297/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:23.488 [298/743] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:23.488 [299/743] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:23.488 [300/743] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.488 [301/743] Generating lib/rte_gro_def with a custom command 00:02:23.488 [302/743] Linking target lib/librte_efd.so.23.0 00:02:23.488 [303/743] Generating lib/rte_gro_mingw with a custom command 00:02:23.488 [304/743] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:23.488 [305/743] Linking static target lib/librte_gpudev.a 00:02:23.746 [306/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:23.746 [307/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:24.003 [308/743] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:24.259 [309/743] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:24.259 [310/743] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:24.259 [311/743] Generating lib/rte_gso_def with a custom command 00:02:24.259 [312/743] Generating lib/rte_gso_mingw with a custom command 00:02:24.518 [313/743] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:24.518 [314/743] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:24.518 [315/743] Linking static target lib/librte_gro.a 00:02:24.518 [316/743] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:24.518 [317/743] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.518 [318/743] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:24.518 [319/743] Linking target lib/librte_gpudev.so.23.0 00:02:24.775 [320/743] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:24.775 [321/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:24.775 [322/743] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.775 [323/743] Linking static target lib/librte_eventdev.a 00:02:24.775 [324/743] Linking target lib/librte_gro.so.23.0 00:02:24.775 [325/743] Generating lib/rte_ip_frag_def with a custom command 00:02:24.775 [326/743] Generating lib/rte_ip_frag_mingw with a custom command 00:02:25.032 [327/743] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:25.032 [328/743] Linking static target lib/librte_jobstats.a 00:02:25.032 [329/743] Generating lib/rte_jobstats_def with a custom command 00:02:25.032 [330/743] Generating lib/rte_jobstats_mingw with a custom command 00:02:25.032 [331/743] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:25.032 [332/743] Linking static target lib/librte_gso.a 00:02:25.290 [333/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:25.290 
[334/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:25.290 [335/743] Generating lib/rte_latencystats_def with a custom command 00:02:25.290 [336/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:25.290 [337/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:25.290 [338/743] Generating lib/rte_latencystats_mingw with a custom command 00:02:25.290 [339/743] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.290 [340/743] Generating lib/rte_lpm_def with a custom command 00:02:25.290 [341/743] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.290 [342/743] Linking target lib/librte_gso.so.23.0 00:02:25.547 [343/743] Generating lib/rte_lpm_mingw with a custom command 00:02:25.547 [344/743] Linking target lib/librte_jobstats.so.23.0 00:02:25.547 [345/743] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.547 [346/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:25.547 [347/743] Linking target lib/librte_cryptodev.so.23.0 00:02:25.547 [348/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:25.547 [349/743] Linking static target lib/librte_ip_frag.a 00:02:25.821 [350/743] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:02:25.821 [351/743] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.079 [352/743] Linking target lib/librte_ip_frag.so.23.0 00:02:26.079 [353/743] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:26.079 [354/743] Linking static target lib/librte_latencystats.a 00:02:26.079 [355/743] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:02:26.079 [356/743] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:26.079 [357/743] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:26.079 [358/743] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:26.079 [359/743] Generating lib/rte_member_def with a custom command 00:02:26.079 [360/743] Generating lib/rte_member_mingw with a custom command 00:02:26.079 [361/743] Generating lib/rte_pcapng_mingw with a custom command 00:02:26.080 [362/743] Generating lib/rte_pcapng_def with a custom command 00:02:26.337 [363/743] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.337 [364/743] Linking target lib/librte_latencystats.so.23.0 00:02:26.337 [365/743] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:26.337 [366/743] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:26.337 [367/743] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:26.337 [368/743] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:26.606 [369/743] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:26.606 [370/743] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:26.606 [371/743] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:02:26.870 [372/743] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:26.870 [373/743] Linking static target lib/librte_lpm.a 00:02:26.870 [374/743] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson 
to capture output) 00:02:26.870 [375/743] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:26.870 [376/743] Linking target lib/librte_eventdev.so.23.0 00:02:26.870 [377/743] Generating lib/rte_power_def with a custom command 00:02:26.870 [378/743] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:26.870 [379/743] Generating lib/rte_power_mingw with a custom command 00:02:26.870 [380/743] Generating lib/rte_rawdev_def with a custom command 00:02:27.140 [381/743] Generating lib/rte_rawdev_mingw with a custom command 00:02:27.140 [382/743] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:02:27.140 [383/743] Generating lib/rte_regexdev_def with a custom command 00:02:27.140 [384/743] Generating lib/rte_regexdev_mingw with a custom command 00:02:27.140 [385/743] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.140 [386/743] Linking target lib/librte_lpm.so.23.0 00:02:27.140 [387/743] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:02:27.140 [388/743] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:27.140 [389/743] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:27.140 [390/743] Linking static target lib/librte_pcapng.a 00:02:27.140 [391/743] Generating lib/rte_dmadev_def with a custom command 00:02:27.140 [392/743] Generating lib/rte_dmadev_mingw with a custom command 00:02:27.401 [393/743] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:27.401 [394/743] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:02:27.401 [395/743] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:27.401 [396/743] Linking static target lib/librte_rawdev.a 00:02:27.401 [397/743] Generating lib/rte_rib_def with a custom command 00:02:27.401 [398/743] Generating lib/rte_rib_mingw with a custom command 00:02:27.401 [399/743] Generating lib/rte_reorder_def with a custom command 00:02:27.401 [400/743] Generating lib/rte_reorder_mingw with a custom command 00:02:27.401 [401/743] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.401 [402/743] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:27.401 [403/743] Linking static target lib/librte_power.a 00:02:27.659 [404/743] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:27.659 [405/743] Linking target lib/librte_pcapng.so.23.0 00:02:27.660 [406/743] Linking static target lib/librte_dmadev.a 00:02:27.660 [407/743] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:02:27.660 [408/743] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.660 [409/743] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:27.918 [410/743] Linking target lib/librte_rawdev.so.23.0 00:02:27.918 [411/743] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:27.918 [412/743] Linking static target lib/librte_regexdev.a 00:02:27.918 [413/743] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:27.918 [414/743] Generating lib/rte_sched_def with a custom command 00:02:27.918 [415/743] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:27.918 [416/743] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:27.918 [417/743] Generating lib/rte_sched_mingw with a custom command 
00:02:27.918 [418/743] Linking static target lib/librte_member.a 00:02:27.918 [419/743] Generating lib/rte_security_def with a custom command 00:02:27.918 [420/743] Generating lib/rte_security_mingw with a custom command 00:02:28.177 [421/743] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:28.177 [422/743] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.177 [423/743] Linking target lib/librte_dmadev.so.23.0 00:02:28.177 [424/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:28.178 [425/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:28.178 [426/743] Generating lib/rte_stack_def with a custom command 00:02:28.178 [427/743] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:28.178 [428/743] Generating lib/rte_stack_mingw with a custom command 00:02:28.178 [429/743] Linking static target lib/librte_reorder.a 00:02:28.178 [430/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:28.178 [431/743] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:02:28.178 [432/743] Linking static target lib/librte_stack.a 00:02:28.178 [433/743] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.436 [434/743] Linking target lib/librte_member.so.23.0 00:02:28.436 [435/743] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:28.436 [436/743] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.436 [437/743] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.436 [438/743] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.436 [439/743] Linking target lib/librte_power.so.23.0 00:02:28.436 [440/743] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:28.436 [441/743] Linking static target lib/librte_rib.a 00:02:28.436 [442/743] Linking target lib/librte_stack.so.23.0 00:02:28.436 [443/743] Linking target lib/librte_reorder.so.23.0 00:02:28.436 [444/743] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.436 [445/743] Linking target lib/librte_regexdev.so.23.0 00:02:28.693 [446/743] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:28.693 [447/743] Linking static target lib/librte_security.a 00:02:28.949 [448/743] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.949 [449/743] Linking target lib/librte_rib.so.23.0 00:02:28.949 [450/743] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:29.206 [451/743] Generating lib/rte_vhost_def with a custom command 00:02:29.206 [452/743] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:02:29.206 [453/743] Generating lib/rte_vhost_mingw with a custom command 00:02:29.206 [454/743] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:29.206 [455/743] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.206 [456/743] Linking target lib/librte_security.so.23.0 00:02:29.462 [457/743] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:29.462 [458/743] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:02:29.462 [459/743] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:29.462 [460/743] Linking static target lib/librte_sched.a 
00:02:30.079 [461/743] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.079 [462/743] Linking target lib/librte_sched.so.23.0 00:02:30.079 [463/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:30.079 [464/743] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:30.079 [465/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:30.079 [466/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:30.079 [467/743] Generating lib/rte_ipsec_def with a custom command 00:02:30.079 [468/743] Generating lib/rte_ipsec_mingw with a custom command 00:02:30.079 [469/743] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:02:30.336 [470/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:30.336 [471/743] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:30.594 [472/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:30.594 [473/743] Generating lib/rte_fib_def with a custom command 00:02:30.852 [474/743] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:02:30.852 [475/743] Linking static target lib/fib/libtrie_avx512_tmp.a 00:02:30.852 [476/743] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:30.852 [477/743] Generating lib/rte_fib_mingw with a custom command 00:02:30.852 [478/743] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:02:30.852 [479/743] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:02:31.111 [480/743] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:31.111 [481/743] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:31.111 [482/743] Linking static target lib/librte_ipsec.a 00:02:31.368 [483/743] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.368 [484/743] Linking target lib/librte_ipsec.so.23.0 00:02:31.368 [485/743] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:31.625 [486/743] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:31.625 [487/743] Linking static target lib/librte_fib.a 00:02:31.625 [488/743] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:31.625 [489/743] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:31.625 [490/743] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:31.881 [491/743] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:31.881 [492/743] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.881 [493/743] Linking target lib/librte_fib.so.23.0 00:02:32.139 [494/743] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:32.397 [495/743] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:32.397 [496/743] Generating lib/rte_port_def with a custom command 00:02:32.654 [497/743] Generating lib/rte_port_mingw with a custom command 00:02:32.654 [498/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:32.654 [499/743] Generating lib/rte_pdump_def with a custom command 00:02:32.654 [500/743] Generating lib/rte_pdump_mingw with a custom command 00:02:32.654 [501/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:32.654 [502/743] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:32.912 [503/743] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:32.912 [504/743] Compiling C object 
lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:32.912 [505/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:32.912 [506/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:33.169 [507/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:33.169 [508/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:33.169 [509/743] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:33.169 [510/743] Linking static target lib/librte_port.a 00:02:33.427 [511/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:33.685 [512/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:33.685 [513/743] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:33.685 [514/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:33.685 [515/743] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.685 [516/743] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:33.685 [517/743] Linking static target lib/librte_pdump.a 00:02:33.943 [518/743] Linking target lib/librte_port.so.23.0 00:02:33.943 [519/743] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:33.943 [520/743] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:02:34.201 [521/743] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.201 [522/743] Linking target lib/librte_pdump.so.23.0 00:02:34.460 [523/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:34.460 [524/743] Generating lib/rte_table_def with a custom command 00:02:34.460 [525/743] Generating lib/rte_table_mingw with a custom command 00:02:34.460 [526/743] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:34.718 [527/743] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:34.718 [528/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:34.718 [529/743] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:34.718 [530/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:34.976 [531/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:34.976 [532/743] Generating lib/rte_pipeline_def with a custom command 00:02:34.976 [533/743] Generating lib/rte_pipeline_mingw with a custom command 00:02:34.976 [534/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:35.234 [535/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:35.234 [536/743] Linking static target lib/librte_table.a 00:02:35.234 [537/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:35.491 [538/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:35.749 [539/743] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:35.749 [540/743] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:35.749 [541/743] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.749 [542/743] Linking target lib/librte_table.so.23.0 00:02:35.749 [543/743] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:36.006 [544/743] Generating lib/rte_graph_def with a custom command 00:02:36.006 [545/743] Generating lib/rte_graph_mingw with a custom 
command 00:02:36.006 [546/743] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:02:36.263 [547/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:36.263 [548/743] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:36.521 [549/743] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:36.521 [550/743] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:36.521 [551/743] Linking static target lib/librte_graph.a 00:02:36.521 [552/743] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:36.779 [553/743] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:36.779 [554/743] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:37.034 [555/743] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:37.291 [556/743] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:37.291 [557/743] Generating lib/rte_node_def with a custom command 00:02:37.291 [558/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:37.291 [559/743] Generating lib/rte_node_mingw with a custom command 00:02:37.291 [560/743] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.291 [561/743] Linking target lib/librte_graph.so.23.0 00:02:37.291 [562/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:37.549 [563/743] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:37.549 [564/743] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:02:37.549 [565/743] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:37.549 [566/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:37.549 [567/743] Generating drivers/rte_bus_pci_def with a custom command 00:02:37.549 [568/743] Generating drivers/rte_bus_pci_mingw with a custom command 00:02:37.549 [569/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:37.549 [570/743] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:37.807 [571/743] Generating drivers/rte_bus_vdev_def with a custom command 00:02:37.807 [572/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:37.807 [573/743] Generating drivers/rte_bus_vdev_mingw with a custom command 00:02:37.807 [574/743] Generating drivers/rte_mempool_ring_def with a custom command 00:02:37.807 [575/743] Generating drivers/rte_mempool_ring_mingw with a custom command 00:02:37.807 [576/743] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:37.807 [577/743] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:37.807 [578/743] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:37.807 [579/743] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:37.807 [580/743] Linking static target lib/librte_node.a 00:02:37.807 [581/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:38.080 [582/743] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:38.080 [583/743] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:38.080 [584/743] Linking static target drivers/librte_bus_vdev.a 00:02:38.080 [585/743] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.080 [586/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 
00:02:38.080 [587/743] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:38.080 [588/743] Linking target lib/librte_node.so.23.0 00:02:38.340 [589/743] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:38.340 [590/743] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.340 [591/743] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:38.340 [592/743] Linking target drivers/librte_bus_vdev.so.23.0 00:02:38.340 [593/743] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:38.340 [594/743] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:38.340 [595/743] Linking static target drivers/librte_bus_pci.a 00:02:38.600 [596/743] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:02:38.858 [597/743] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.858 [598/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:38.858 [599/743] Linking target drivers/librte_bus_pci.so.23.0 00:02:38.858 [600/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:38.858 [601/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:38.858 [602/743] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:02:39.116 [603/743] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:39.116 [604/743] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:39.116 [605/743] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:39.116 [606/743] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:39.116 [607/743] Linking static target drivers/librte_mempool_ring.a 00:02:39.116 [608/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:39.116 [609/743] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:39.374 [610/743] Linking target drivers/librte_mempool_ring.so.23.0 00:02:39.940 [611/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:40.200 [612/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:40.200 [613/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:40.200 [614/743] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:40.488 [615/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:40.746 [616/743] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:40.746 [617/743] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:41.312 [618/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:41.312 [619/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:41.570 [620/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:41.570 [621/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:41.570 [622/743] Generating drivers/rte_net_i40e_def with a custom command 00:02:41.570 [623/743] Generating drivers/rte_net_i40e_mingw with a custom command 00:02:41.570 [624/743] Compiling C object 
drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:41.827 [625/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:42.391 [626/743] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:42.958 [627/743] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:42.958 [628/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:43.216 [629/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:43.216 [630/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:43.216 [631/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:43.216 [632/743] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:43.216 [633/743] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:43.216 [634/743] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:43.474 [635/743] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:43.474 [636/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:02:44.040 [637/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:44.040 [638/743] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:44.040 [639/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:44.297 [640/743] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:44.297 [641/743] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:44.297 [642/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:44.297 [643/743] Linking static target drivers/librte_net_i40e.a 00:02:44.297 [644/743] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:44.553 [645/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:44.553 [646/743] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:44.553 [647/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:44.553 [648/743] Linking static target lib/librte_vhost.a 00:02:44.811 [649/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:44.811 [650/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:45.103 [651/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:45.103 [652/743] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.103 [653/743] Linking target drivers/librte_net_i40e.so.23.0 00:02:45.103 [654/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:45.360 [655/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:45.617 [656/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:45.618 [657/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:45.875 [658/743] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.132 [659/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:46.132 [660/743] Compiling C object 
app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:46.132 [661/743] Linking target lib/librte_vhost.so.23.0 00:02:46.132 [662/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:46.132 [663/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:46.132 [664/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:46.132 [665/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:46.389 [666/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:46.645 [667/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:46.645 [668/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:46.645 [669/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:46.903 [670/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:47.160 [671/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:47.160 [672/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:47.160 [673/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:47.721 [674/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:47.979 [675/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:47.979 [676/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:47.979 [677/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:48.240 [678/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:48.240 [679/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:48.240 [680/743] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:48.499 [681/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:48.756 [682/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:48.756 [683/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:48.756 [684/743] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:48.756 [685/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:49.015 [686/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:49.015 [687/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:49.015 [688/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:49.579 [689/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:49.579 [690/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:49.579 [691/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:49.579 [692/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:49.579 [693/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:49.579 [694/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:49.837 [695/743] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:50.094 [696/743] Compiling C object 
app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:50.094 [697/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:50.351 [698/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:50.608 [699/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:50.865 [700/743] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:51.123 [701/743] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:51.123 [702/743] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:51.381 [703/743] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:51.381 [704/743] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:51.381 [705/743] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:51.948 [706/743] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:52.206 [707/743] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:52.206 [708/743] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:52.206 [709/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:52.463 [710/743] Linking static target lib/librte_pipeline.a 00:02:52.463 [711/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:53.028 [712/743] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:53.028 [713/743] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:53.028 [714/743] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:53.028 [715/743] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:53.286 [716/743] Linking target app/dpdk-dumpcap 00:02:53.286 [717/743] Linking target app/dpdk-proc-info 00:02:53.543 [718/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:53.543 [719/743] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:53.543 [720/743] Linking target app/dpdk-test-bbdev 00:02:53.801 [721/743] Linking target app/dpdk-pdump 00:02:53.802 [722/743] Linking target app/dpdk-test-acl 00:02:53.802 [723/743] Linking target app/dpdk-test-cmdline 00:02:53.802 [724/743] Linking target app/dpdk-test-compress-perf 00:02:53.802 [725/743] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:53.802 [726/743] Linking target app/dpdk-test-crypto-perf 00:02:54.059 [727/743] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:54.060 [728/743] Linking target app/dpdk-test-eventdev 00:02:54.317 [729/743] Linking target app/dpdk-test-fib 00:02:54.317 [730/743] Linking target app/dpdk-test-pipeline 00:02:54.317 [731/743] Linking target app/dpdk-test-flow-perf 00:02:54.575 [732/743] Linking target app/dpdk-test-gpudev 00:02:54.832 [733/743] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:54.833 [734/743] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:55.090 [735/743] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:55.348 [736/743] Linking target app/dpdk-test-sad 00:02:55.348 [737/743] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:55.606 [738/743] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:55.606 [739/743] Linking target app/dpdk-testpmd 00:02:55.606 [740/743] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.606 [741/743] Linking target lib/librte_pipeline.so.23.0 00:02:55.864 [742/743] Linking target 
app/dpdk-test-security-perf 00:02:55.864 [743/743] Linking target app/dpdk-test-regex 00:02:55.864 13:01:52 build_native_dpdk -- common/autobuild_common.sh@187 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:02:56.122 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:56.122 [0/1] Installing files. 00:02:56.383 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:02:56.383 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:02:56.383 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:02:56.383 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:56.383 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:56.383 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:56.383 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:56.383 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:56.383 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:56.383 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:56.383 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:56.383 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:56.383 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:56.383 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:56.383 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:56.383 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:56.383 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:56.383 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:02:56.383 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:02:56.383 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:02:56.383 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:02:56.383 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:02:56.383 Installing 
/home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:02:56.383 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:02:56.383 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:02:56.383 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:02:56.383 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:56.383 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:56.383 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:56.383 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:56.383 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:56.383 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:56.383 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:56.383 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:56.383 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:56.383 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:56.383 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:56.383 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:56.383 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:56.383 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:56.383 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:56.383 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:56.383 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:56.383 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:56.383 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:56.383 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:56.383 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:56.383 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:56.383 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:56.383 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:56.383 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:56.383 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:56.383 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:56.383 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:56.383 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:02:56.383 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/flow_classify.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:02:56.383 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/ipv4_rules_file.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:02:56.383 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:56.384 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:56.384 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:56.384 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:02:56.384 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:02:56.384 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:56.384 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:56.384 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 
00:02:56.384 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:56.384 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:56.384 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:56.384 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:56.384 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:56.384 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:56.384 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:56.384 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:56.384 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:56.384 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:56.384 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:56.384 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:56.384 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:56.384 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:56.384 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:56.384 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:56.384 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:56.384 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:56.384 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:56.384 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:56.384 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:56.384 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:56.384 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:56.384 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:56.384 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:56.384 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:56.384 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:56.384 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:56.384 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:56.384 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:56.384 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:56.384 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/kni.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:56.384 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:56.384 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:56.384 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:56.384 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:56.384 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:56.384 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:56.384 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:56.384 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:56.384 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:56.384 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:56.384 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:56.384 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:56.384 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:56.384 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:56.384 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:56.384 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:56.384 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:56.384 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:56.384 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:56.384 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:56.384 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:56.384 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:56.384 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:56.384 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:56.384 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:56.384 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:56.384 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:56.384 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:56.384 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:56.384 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:56.384 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:56.384 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:56.384 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:56.385 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:56.385 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:56.385 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 
00:02:56.385 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:56.385 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:56.385 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:56.385 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:56.385 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:56.385 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:56.385 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:56.385 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:56.385 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:56.385 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:56.385 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:56.385 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:56.385 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:56.385 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:56.385 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:56.385 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:56.385 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:56.385 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:56.385 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:56.385 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:56.385 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:56.385 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:56.385 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:56.385 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:56.385 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:56.385 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:56.385 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:56.385 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:56.385 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:56.385 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:56.385 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:56.385 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:56.385 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:56.385 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:56.385 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:56.385 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:56.385 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:56.385 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:56.385 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:56.385 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:56.385 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:56.385 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:56.385 
Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:56.385 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:56.385 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:56.385 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:56.385 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:56.385 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:56.385 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:56.385 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:56.385 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:56.385 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:02:56.385 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:02:56.385 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:56.385 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:56.385 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:56.385 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:56.385 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:56.385 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:56.385 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:56.385 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.385 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.385 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.385 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.385 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.385 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.385 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.385 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.385 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.385 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.386 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.386 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.386 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.386 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.386 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.386 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.386 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.386 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.386 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.386 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.386 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.386 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.386 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.386 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.386 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.386 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.386 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.386 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.386 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.386 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.386 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.386 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.386 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.386 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:56.386 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:56.386 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:02:56.386 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:56.386 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:56.386 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:56.386 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:56.386 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:56.386 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:56.386 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:56.386 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:56.386 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:56.386 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:56.386 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:56.386 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:56.386 Installing 
/home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:56.386 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:56.386 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:56.386 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:56.386 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:56.386 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:56.386 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:56.386 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:56.386 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:02:56.386 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:02:56.386 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:02:56.386 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:02:56.386 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:56.386 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:56.386 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:56.386 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:56.386 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:56.386 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:56.386 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:56.386 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:56.386 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:56.386 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:56.386 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:56.386 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:56.386 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:56.386 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:56.386 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:56.386 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:56.386 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:56.386 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:56.386 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:56.387 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:56.387 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:56.387 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:56.387 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:56.387 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:56.387 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:56.387 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:56.387 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:56.387 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:56.387 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:56.387 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:56.387 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:56.387 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:56.387 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:56.387 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:56.387 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:56.387 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:56.387 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:56.387 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:56.387 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:56.387 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:56.387 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:56.387 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:56.387 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:56.387 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:56.387 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:56.387 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:56.387 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:02:56.387 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:02:56.387 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:56.387 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:56.387 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:56.387 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:56.387 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:56.387 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:56.387 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:56.387 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:56.387 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:56.387 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:56.387 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:56.387 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:56.387 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:56.387 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:56.387 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:56.387 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:56.387 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:56.387 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:56.387 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:56.387 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:56.387 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:56.387 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:02:56.387 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:56.387 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:56.387 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:56.387 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:56.387 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:56.387 Installing 
/home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:56.387 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:56.387 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:56.387 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:56.387 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:02:56.387 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:02:56.387 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:02:56.387 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:02:56.387 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:02:56.387 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:02:56.387 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:56.387 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:56.388 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:56.388 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:56.388 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:56.388 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:56.388 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:56.388 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:56.388 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:56.388 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:56.388 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:56.388 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:56.388 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:56.388 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:56.388 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:56.388 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:56.388 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:56.388 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:56.388 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:56.388 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:56.388 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:56.388 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:56.388 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:56.388 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:56.388 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:56.388 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:56.388 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:56.388 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:56.388 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:56.388 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:56.388 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:56.388 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:56.388 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:56.388 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:56.388 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:56.388 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:56.388 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:02:56.388 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:02:56.390 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:56.390 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:56.390 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.390 Installing lib/librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.390 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.390 Installing lib/librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.390 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.390 Installing lib/librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.390 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.390 Installing lib/librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.390 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.390 Installing lib/librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.390 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.390 Installing lib/librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.390 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.390 Installing lib/librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.390 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.648 Installing lib/librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.648 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.648 Installing lib/librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.648 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.648 Installing lib/librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.648 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.648 Installing lib/librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.648 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.648 Installing lib/librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.648 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.648 Installing lib/librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.648 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.648 Installing lib/librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.648 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.648 Installing lib/librte_timer.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.648 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.648 Installing lib/librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.648 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.648 Installing lib/librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.648 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.648 Installing lib/librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.648 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.648 Installing lib/librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.648 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.648 Installing lib/librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.648 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.648 Installing lib/librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.648 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.648 Installing lib/librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.648 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.648 Installing lib/librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.648 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.649 Installing lib/librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.649 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.649 Installing lib/librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.649 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.649 Installing lib/librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.649 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.649 Installing lib/librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.649 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.649 Installing lib/librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.649 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.649 Installing lib/librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.649 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.649 Installing lib/librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.649 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.649 Installing lib/librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.649 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.649 Installing lib/librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.649 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.649 Installing lib/librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.649 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.649 Installing lib/librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.649 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.649 Installing 
lib/librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.649 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.649 Installing lib/librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.649 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.649 Installing lib/librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.649 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.649 Installing lib/librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.649 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.649 Installing lib/librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.649 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.649 Installing lib/librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.649 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.649 Installing lib/librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.649 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.649 Installing lib/librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.649 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.649 Installing lib/librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.649 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.649 Installing lib/librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.649 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.649 Installing lib/librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.649 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.649 Installing lib/librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.649 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.649 Installing lib/librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.649 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.649 Installing lib/librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.649 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.649 Installing lib/librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.649 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.649 Installing lib/librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.649 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.649 Installing lib/librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.649 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.649 Installing lib/librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.649 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.649 Installing drivers/librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:02:56.649 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.649 Installing drivers/librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:02:56.649 Installing drivers/librte_mempool_ring.a to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.649 Installing drivers/librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:02:56.649 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.649 Installing drivers/librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:02:56.649 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:56.649 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:56.649 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:56.649 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:56.649 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:56.649 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:56.649 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:56.649 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:56.649 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:56.909 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:56.909 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:56.909 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:56.909 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:56.909 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:56.909 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:56.909 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:56.909 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:56.909 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.909 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.909 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.909 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:56.909 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:56.909 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:56.909 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:56.909 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:56.909 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:56.909 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:56.909 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:56.910 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.910 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 
Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing 
/home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 
Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_empty_poll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_intel_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.911 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.912 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.912 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.912 Installing 
/home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.912 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.912 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.912 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.912 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.912 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.912 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.912 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.912 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.912 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.912 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.912 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.912 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.912 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.912 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.912 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.912 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.912 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.912 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.912 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.912 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.912 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.912 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.912 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.912 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.912 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.912 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.912 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.912 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:56.912 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.912 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.912 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.912 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.912 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.912 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.912 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.912 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.912 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.912 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.912 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.912 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.912 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.912 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.912 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.912 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.912 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.912 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.912 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.912 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.912 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.912 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.912 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.912 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.912 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.912 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.912 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.912 Installing 
/home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.912 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.912 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.912 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:56.912 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:56.912 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:56.912 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:56.912 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.912 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:02:56.912 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:02:56.912 Installing symlink pointing to librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.23 00:02:56.912 Installing symlink pointing to librte_kvargs.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:02:56.912 Installing symlink pointing to librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.23 00:02:56.912 Installing symlink pointing to librte_telemetry.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:02:56.912 Installing symlink pointing to librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.23 00:02:56.912 Installing symlink pointing to librte_eal.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:02:56.912 Installing symlink pointing to librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.23 00:02:56.912 Installing symlink pointing to librte_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:02:56.912 Installing symlink pointing to librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.23 00:02:56.912 Installing symlink pointing to librte_rcu.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:02:56.912 Installing symlink pointing to librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.23 00:02:56.912 Installing symlink pointing to librte_mempool.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:02:56.912 Installing symlink pointing to librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.23 00:02:56.912 Installing symlink pointing to librte_mbuf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:02:56.912 Installing symlink pointing to librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.23 00:02:56.912 Installing symlink pointing to librte_net.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:02:56.912 Installing symlink pointing to librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.23 00:02:56.912 Installing symlink pointing to librte_meter.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:02:56.912 Installing symlink pointing to librte_ethdev.so.23.0 
to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.23 00:02:56.912 Installing symlink pointing to librte_ethdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:02:56.912 Installing symlink pointing to librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.23 00:02:56.912 Installing symlink pointing to librte_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:02:56.912 Installing symlink pointing to librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.23 00:02:56.912 Installing symlink pointing to librte_cmdline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:02:56.912 Installing symlink pointing to librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.23 00:02:56.912 Installing symlink pointing to librte_metrics.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:02:56.912 Installing symlink pointing to librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.23 00:02:56.912 Installing symlink pointing to librte_hash.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:02:56.912 Installing symlink pointing to librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.23 00:02:56.912 Installing symlink pointing to librte_timer.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:02:56.912 Installing symlink pointing to librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.23 00:02:56.913 Installing symlink pointing to librte_acl.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:02:56.913 Installing symlink pointing to librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.23 00:02:56.913 Installing symlink pointing to librte_bbdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:02:56.913 Installing symlink pointing to librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.23 00:02:56.913 Installing symlink pointing to librte_bitratestats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:02:56.913 Installing symlink pointing to librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.23 00:02:56.913 Installing symlink pointing to librte_bpf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:02:56.913 Installing symlink pointing to librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.23 00:02:56.913 Installing symlink pointing to librte_cfgfile.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:02:56.913 Installing symlink pointing to librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.23 00:02:56.913 Installing symlink pointing to librte_compressdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:02:56.913 Installing symlink pointing to librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.23 00:02:56.913 Installing symlink pointing to librte_cryptodev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:02:56.913 Installing symlink pointing to librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.23 00:02:56.913 Installing symlink pointing to librte_distributor.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:02:56.913 Installing symlink pointing to librte_efd.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.23 00:02:56.913 Installing symlink pointing to librte_efd.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:02:56.913 Installing symlink pointing to librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.23 00:02:56.913 Installing symlink pointing to librte_eventdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:02:56.913 Installing symlink pointing to librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.23 00:02:56.913 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:02:56.913 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:02:56.913 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:02:56.913 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:02:56.913 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:02:56.913 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:02:56.913 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:02:56.913 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:02:56.913 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:02:56.913 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:02:56.913 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:02:56.913 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:02:56.913 Installing symlink pointing to librte_gpudev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:02:56.913 Installing symlink pointing to librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.23 00:02:56.913 Installing symlink pointing to librte_gro.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:02:56.913 Installing symlink pointing to librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.23 00:02:56.913 Installing symlink pointing to librte_gso.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:02:56.913 Installing symlink pointing to librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.23 00:02:56.913 Installing symlink pointing to librte_ip_frag.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:02:56.913 Installing symlink pointing to librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.23 00:02:56.913 Installing symlink pointing to librte_jobstats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:02:56.913 Installing symlink pointing to librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.23 00:02:56.913 Installing symlink pointing to librte_latencystats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:02:56.913 Installing symlink pointing to librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.23 00:02:56.913 Installing symlink pointing to librte_lpm.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:02:56.913 Installing symlink pointing to librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.23 00:02:56.913 Installing symlink pointing to librte_member.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:02:56.913 Installing symlink pointing to librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.23 
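The surrounding entries stage DPDK's headers, pkg-config metadata and PMD plugin symlinks under /home/vagrant/spdk_repo/dpdk/build. A minimal shell sketch of how such a staged install can be inspected before handing it to SPDK (the PKG_CONFIG_PATH export and the queries are illustrative assumptions, not commands this job runs):

    # Point pkg-config at the libdpdk.pc / libdpdk-libs.pc files installed above.
    export PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
    pkg-config --modversion libdpdk       # version of the DPDK build staged here
    pkg-config --cflags --libs libdpdk    # include/lib flags a consumer such as SPDK picks up
    # The PMD plugin symlinks land in the pmds-23.0 directory referenced by these entries.
    ls /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/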
00:02:56.913 Installing symlink pointing to librte_pcapng.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:02:56.913 Installing symlink pointing to librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.23 00:02:56.913 Installing symlink pointing to librte_power.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:02:56.913 Installing symlink pointing to librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.23 00:02:56.913 Installing symlink pointing to librte_rawdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:02:56.913 Installing symlink pointing to librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.23 00:02:56.913 Installing symlink pointing to librte_regexdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:02:56.913 Installing symlink pointing to librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.23 00:02:56.913 Installing symlink pointing to librte_dmadev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:02:56.913 Installing symlink pointing to librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.23 00:02:56.913 Installing symlink pointing to librte_rib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:02:56.913 Installing symlink pointing to librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.23 00:02:56.913 Installing symlink pointing to librte_reorder.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:02:56.913 Installing symlink pointing to librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.23 00:02:56.913 Installing symlink pointing to librte_sched.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:02:56.913 Installing symlink pointing to librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.23 00:02:56.913 Installing symlink pointing to librte_security.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:02:56.913 Installing symlink pointing to librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.23 00:02:56.913 Installing symlink pointing to librte_stack.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:02:56.913 Installing symlink pointing to librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.23 00:02:56.913 Installing symlink pointing to librte_vhost.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:02:56.913 Installing symlink pointing to librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.23 00:02:56.913 Installing symlink pointing to librte_ipsec.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:02:56.913 Installing symlink pointing to librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.23 00:02:56.913 Installing symlink pointing to librte_fib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:02:56.913 Installing symlink pointing to librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.23 00:02:56.913 Installing symlink pointing to librte_port.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:02:56.913 Installing symlink pointing to librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.23 00:02:56.913 Installing symlink pointing to librte_pdump.so.23 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:02:56.913 Installing symlink pointing to librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.23 00:02:56.913 Installing symlink pointing to librte_table.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:02:56.913 Installing symlink pointing to librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.23 00:02:56.913 Installing symlink pointing to librte_pipeline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:02:56.913 Installing symlink pointing to librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.23 00:02:56.913 Installing symlink pointing to librte_graph.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:02:56.913 Installing symlink pointing to librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.23 00:02:56.913 Installing symlink pointing to librte_node.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:02:56.913 Installing symlink pointing to librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:02:56.913 Installing symlink pointing to librte_bus_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:02:56.913 Installing symlink pointing to librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:02:56.913 Installing symlink pointing to librte_bus_vdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:02:56.913 Installing symlink pointing to librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:02:56.913 Installing symlink pointing to librte_mempool_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:02:56.913 Installing symlink pointing to librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:02:56.913 Installing symlink pointing to librte_net_i40e.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:02:56.913 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:02:56.913 13:01:53 build_native_dpdk -- common/autobuild_common.sh@189 -- $ uname -s 00:02:56.913 13:01:53 build_native_dpdk -- common/autobuild_common.sh@189 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:56.913 13:01:53 build_native_dpdk -- common/autobuild_common.sh@200 -- $ cat 00:02:56.913 13:01:53 build_native_dpdk -- common/autobuild_common.sh@205 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:56.913 00:02:56.913 real 0m52.584s 00:02:56.913 user 6m14.766s 00:02:56.913 sys 1m1.626s 00:02:56.913 ************************************ 00:02:56.913 END TEST build_native_dpdk 00:02:56.913 ************************************ 00:02:56.913 13:01:53 build_native_dpdk -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:02:56.913 13:01:53 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:02:56.913 13:01:53 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:56.913 13:01:53 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:56.913 13:01:53 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:56.913 13:01:53 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:56.913 13:01:53 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:56.913 13:01:53 -- 
spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:56.913 13:01:53 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:56.913 13:01:53 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang --with-shared 00:02:57.171 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:02:57.171 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:02:57.171 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:02:57.171 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:57.736 Using 'verbs' RDMA provider 00:03:10.860 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:23.049 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:23.049 go version go1.21.1 linux/amd64 00:03:23.306 Creating mk/config.mk...done. 00:03:23.306 Creating mk/cc.flags.mk...done. 00:03:23.306 Type 'make' to build. 00:03:23.306 13:02:20 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:03:23.306 13:02:20 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:03:23.306 13:02:20 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:03:23.306 13:02:20 -- common/autotest_common.sh@10 -- $ set +x 00:03:23.306 ************************************ 00:03:23.306 START TEST make 00:03:23.306 ************************************ 00:03:23.306 13:02:20 make -- common/autotest_common.sh@1121 -- $ make -j10 00:03:23.871 make[1]: Nothing to be done for 'all'. 00:03:50.464 CC lib/ut_mock/mock.o 00:03:50.464 CC lib/log/log.o 00:03:50.464 CC lib/ut/ut.o 00:03:50.464 CC lib/log/log_flags.o 00:03:50.464 CC lib/log/log_deprecated.o 00:03:50.464 LIB libspdk_log.a 00:03:50.464 LIB libspdk_ut.a 00:03:50.464 LIB libspdk_ut_mock.a 00:03:50.464 SO libspdk_log.so.7.0 00:03:50.464 SO libspdk_ut.so.2.0 00:03:50.464 SO libspdk_ut_mock.so.6.0 00:03:50.464 SYMLINK libspdk_ut.so 00:03:50.464 SYMLINK libspdk_log.so 00:03:50.464 SYMLINK libspdk_ut_mock.so 00:03:50.464 CC lib/util/base64.o 00:03:50.464 CC lib/ioat/ioat.o 00:03:50.464 CC lib/util/cpuset.o 00:03:50.464 CXX lib/trace_parser/trace.o 00:03:50.464 CC lib/util/bit_array.o 00:03:50.464 CC lib/util/crc16.o 00:03:50.464 CC lib/util/crc32.o 00:03:50.464 CC lib/dma/dma.o 00:03:50.464 CC lib/util/crc32c.o 00:03:50.464 CC lib/vfio_user/host/vfio_user_pci.o 00:03:50.464 CC lib/util/crc32_ieee.o 00:03:50.464 CC lib/util/crc64.o 00:03:50.464 CC lib/util/dif.o 00:03:50.464 CC lib/util/fd.o 00:03:50.464 CC lib/util/file.o 00:03:50.464 LIB libspdk_dma.a 00:03:50.464 CC lib/util/hexlify.o 00:03:50.464 SO libspdk_dma.so.4.0 00:03:50.464 CC lib/vfio_user/host/vfio_user.o 00:03:50.464 CC lib/util/iov.o 00:03:50.464 CC lib/util/math.o 00:03:50.464 SYMLINK libspdk_dma.so 00:03:50.464 CC lib/util/pipe.o 00:03:50.464 CC lib/util/strerror_tls.o 00:03:50.464 LIB libspdk_ioat.a 00:03:50.464 SO libspdk_ioat.so.7.0 00:03:50.464 CC lib/util/string.o 00:03:50.464 CC lib/util/uuid.o 00:03:50.464 SYMLINK libspdk_ioat.so 00:03:50.464 CC lib/util/fd_group.o 00:03:50.464 CC lib/util/xor.o 00:03:50.464 CC lib/util/zipf.o 00:03:50.464 LIB libspdk_vfio_user.a 00:03:50.464 SO libspdk_vfio_user.so.5.0 00:03:50.742 SYMLINK libspdk_vfio_user.so 00:03:50.742 LIB libspdk_util.a 00:03:50.742 SO libspdk_util.so.9.0 00:03:51.000 LIB 
libspdk_trace_parser.a 00:03:51.000 SYMLINK libspdk_util.so 00:03:51.000 SO libspdk_trace_parser.so.5.0 00:03:51.000 SYMLINK libspdk_trace_parser.so 00:03:51.000 CC lib/rdma/common.o 00:03:51.000 CC lib/vmd/vmd.o 00:03:51.000 CC lib/rdma/rdma_verbs.o 00:03:51.000 CC lib/idxd/idxd.o 00:03:51.000 CC lib/vmd/led.o 00:03:51.000 CC lib/idxd/idxd_user.o 00:03:51.000 CC lib/idxd/idxd_kernel.o 00:03:51.000 CC lib/json/json_parse.o 00:03:51.000 CC lib/env_dpdk/env.o 00:03:51.000 CC lib/conf/conf.o 00:03:51.260 CC lib/json/json_util.o 00:03:51.260 CC lib/json/json_write.o 00:03:51.260 CC lib/env_dpdk/memory.o 00:03:51.260 CC lib/env_dpdk/pci.o 00:03:51.260 LIB libspdk_conf.a 00:03:51.260 CC lib/env_dpdk/init.o 00:03:51.260 SO libspdk_conf.so.6.0 00:03:51.518 LIB libspdk_rdma.a 00:03:51.518 SO libspdk_rdma.so.6.0 00:03:51.518 SYMLINK libspdk_conf.so 00:03:51.518 CC lib/env_dpdk/threads.o 00:03:51.518 SYMLINK libspdk_rdma.so 00:03:51.518 CC lib/env_dpdk/pci_ioat.o 00:03:51.518 CC lib/env_dpdk/pci_virtio.o 00:03:51.518 LIB libspdk_json.a 00:03:51.518 SO libspdk_json.so.6.0 00:03:51.518 CC lib/env_dpdk/pci_vmd.o 00:03:51.518 CC lib/env_dpdk/pci_idxd.o 00:03:51.518 LIB libspdk_idxd.a 00:03:51.777 CC lib/env_dpdk/pci_event.o 00:03:51.777 CC lib/env_dpdk/sigbus_handler.o 00:03:51.777 SYMLINK libspdk_json.so 00:03:51.777 SO libspdk_idxd.so.12.0 00:03:51.777 CC lib/env_dpdk/pci_dpdk.o 00:03:51.777 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:51.777 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:51.777 LIB libspdk_vmd.a 00:03:51.777 SO libspdk_vmd.so.6.0 00:03:51.777 SYMLINK libspdk_idxd.so 00:03:51.777 SYMLINK libspdk_vmd.so 00:03:52.034 CC lib/jsonrpc/jsonrpc_server.o 00:03:52.034 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:52.034 CC lib/jsonrpc/jsonrpc_client.o 00:03:52.034 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:52.292 LIB libspdk_jsonrpc.a 00:03:52.292 SO libspdk_jsonrpc.so.6.0 00:03:52.292 SYMLINK libspdk_jsonrpc.so 00:03:52.550 LIB libspdk_env_dpdk.a 00:03:52.550 SO libspdk_env_dpdk.so.14.0 00:03:52.550 CC lib/rpc/rpc.o 00:03:52.822 SYMLINK libspdk_env_dpdk.so 00:03:52.822 LIB libspdk_rpc.a 00:03:52.822 SO libspdk_rpc.so.6.0 00:03:53.079 SYMLINK libspdk_rpc.so 00:03:53.336 CC lib/trace/trace.o 00:03:53.336 CC lib/trace/trace_flags.o 00:03:53.336 CC lib/trace/trace_rpc.o 00:03:53.336 CC lib/keyring/keyring.o 00:03:53.336 CC lib/keyring/keyring_rpc.o 00:03:53.336 CC lib/notify/notify_rpc.o 00:03:53.336 CC lib/notify/notify.o 00:03:53.336 LIB libspdk_notify.a 00:03:53.593 LIB libspdk_keyring.a 00:03:53.593 SO libspdk_notify.so.6.0 00:03:53.593 LIB libspdk_trace.a 00:03:53.593 SO libspdk_keyring.so.1.0 00:03:53.593 SO libspdk_trace.so.10.0 00:03:53.593 SYMLINK libspdk_notify.so 00:03:53.593 SYMLINK libspdk_keyring.so 00:03:53.593 SYMLINK libspdk_trace.so 00:03:53.851 CC lib/sock/sock.o 00:03:53.851 CC lib/sock/sock_rpc.o 00:03:53.851 CC lib/thread/thread.o 00:03:53.851 CC lib/thread/iobuf.o 00:03:54.415 LIB libspdk_sock.a 00:03:54.415 SO libspdk_sock.so.9.0 00:03:54.415 SYMLINK libspdk_sock.so 00:03:54.672 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:54.672 CC lib/nvme/nvme_ctrlr.o 00:03:54.672 CC lib/nvme/nvme_ns_cmd.o 00:03:54.672 CC lib/nvme/nvme_fabric.o 00:03:54.672 CC lib/nvme/nvme_pcie_common.o 00:03:54.672 CC lib/nvme/nvme_pcie.o 00:03:54.672 CC lib/nvme/nvme_qpair.o 00:03:54.672 CC lib/nvme/nvme_ns.o 00:03:54.672 CC lib/nvme/nvme.o 00:03:55.605 CC lib/nvme/nvme_quirks.o 00:03:55.605 CC lib/nvme/nvme_transport.o 00:03:55.605 LIB libspdk_thread.a 00:03:55.605 SO libspdk_thread.so.10.0 00:03:55.605 CC 
lib/nvme/nvme_discovery.o 00:03:55.605 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:55.605 SYMLINK libspdk_thread.so 00:03:55.605 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:55.605 CC lib/nvme/nvme_tcp.o 00:03:55.605 CC lib/nvme/nvme_opal.o 00:03:55.864 CC lib/nvme/nvme_io_msg.o 00:03:55.864 CC lib/accel/accel.o 00:03:56.122 CC lib/nvme/nvme_poll_group.o 00:03:56.122 CC lib/nvme/nvme_zns.o 00:03:56.122 CC lib/nvme/nvme_stubs.o 00:03:56.122 CC lib/accel/accel_rpc.o 00:03:56.122 CC lib/nvme/nvme_auth.o 00:03:56.380 CC lib/accel/accel_sw.o 00:03:56.380 CC lib/blob/blobstore.o 00:03:56.639 CC lib/init/json_config.o 00:03:56.639 CC lib/nvme/nvme_cuse.o 00:03:56.639 CC lib/nvme/nvme_rdma.o 00:03:56.897 LIB libspdk_accel.a 00:03:56.897 CC lib/init/subsystem.o 00:03:56.897 CC lib/blob/request.o 00:03:56.897 SO libspdk_accel.so.15.0 00:03:56.897 CC lib/blob/zeroes.o 00:03:56.897 CC lib/virtio/virtio.o 00:03:56.897 SYMLINK libspdk_accel.so 00:03:56.897 CC lib/init/subsystem_rpc.o 00:03:57.154 CC lib/blob/blob_bs_dev.o 00:03:57.154 CC lib/bdev/bdev.o 00:03:57.154 CC lib/init/rpc.o 00:03:57.154 CC lib/virtio/virtio_vhost_user.o 00:03:57.154 CC lib/bdev/bdev_rpc.o 00:03:57.154 CC lib/virtio/virtio_vfio_user.o 00:03:57.154 CC lib/bdev/bdev_zone.o 00:03:57.412 CC lib/bdev/part.o 00:03:57.412 LIB libspdk_init.a 00:03:57.412 SO libspdk_init.so.5.0 00:03:57.412 SYMLINK libspdk_init.so 00:03:57.412 CC lib/bdev/scsi_nvme.o 00:03:57.412 CC lib/virtio/virtio_pci.o 00:03:57.669 CC lib/event/reactor.o 00:03:57.669 CC lib/event/app.o 00:03:57.669 CC lib/event/app_rpc.o 00:03:57.669 CC lib/event/log_rpc.o 00:03:57.669 CC lib/event/scheduler_static.o 00:03:57.669 LIB libspdk_virtio.a 00:03:57.927 SO libspdk_virtio.so.7.0 00:03:57.927 SYMLINK libspdk_virtio.so 00:03:58.183 LIB libspdk_event.a 00:03:58.183 SO libspdk_event.so.13.0 00:03:58.183 LIB libspdk_nvme.a 00:03:58.183 SYMLINK libspdk_event.so 00:03:58.441 SO libspdk_nvme.so.13.0 00:03:58.698 SYMLINK libspdk_nvme.so 00:03:59.631 LIB libspdk_blob.a 00:03:59.631 SO libspdk_blob.so.11.0 00:03:59.631 SYMLINK libspdk_blob.so 00:03:59.889 LIB libspdk_bdev.a 00:03:59.889 CC lib/blobfs/blobfs.o 00:03:59.889 CC lib/blobfs/tree.o 00:03:59.889 CC lib/lvol/lvol.o 00:03:59.889 SO libspdk_bdev.so.15.0 00:04:00.146 SYMLINK libspdk_bdev.so 00:04:00.404 CC lib/ublk/ublk.o 00:04:00.404 CC lib/ublk/ublk_rpc.o 00:04:00.404 CC lib/nvmf/ctrlr_discovery.o 00:04:00.404 CC lib/nvmf/ctrlr.o 00:04:00.404 CC lib/nvmf/ctrlr_bdev.o 00:04:00.404 CC lib/ftl/ftl_core.o 00:04:00.404 CC lib/scsi/dev.o 00:04:00.404 CC lib/nbd/nbd.o 00:04:00.404 CC lib/nbd/nbd_rpc.o 00:04:00.661 CC lib/scsi/lun.o 00:04:00.661 CC lib/nvmf/subsystem.o 00:04:00.661 LIB libspdk_nbd.a 00:04:00.661 SO libspdk_nbd.so.7.0 00:04:00.661 CC lib/ftl/ftl_init.o 00:04:00.661 CC lib/ftl/ftl_layout.o 00:04:00.661 LIB libspdk_blobfs.a 00:04:00.918 SYMLINK libspdk_nbd.so 00:04:00.918 SO libspdk_blobfs.so.10.0 00:04:00.918 CC lib/nvmf/nvmf.o 00:04:00.918 SYMLINK libspdk_blobfs.so 00:04:00.918 CC lib/nvmf/nvmf_rpc.o 00:04:00.918 LIB libspdk_lvol.a 00:04:00.919 CC lib/scsi/port.o 00:04:00.919 LIB libspdk_ublk.a 00:04:00.919 SO libspdk_lvol.so.10.0 00:04:00.919 SO libspdk_ublk.so.3.0 00:04:00.919 CC lib/scsi/scsi.o 00:04:00.919 SYMLINK libspdk_lvol.so 00:04:00.919 CC lib/scsi/scsi_bdev.o 00:04:01.176 SYMLINK libspdk_ublk.so 00:04:01.176 CC lib/ftl/ftl_debug.o 00:04:01.176 CC lib/nvmf/transport.o 00:04:01.176 CC lib/ftl/ftl_io.o 00:04:01.176 CC lib/ftl/ftl_sb.o 00:04:01.176 CC lib/scsi/scsi_pr.o 00:04:01.432 CC lib/ftl/ftl_l2p.o 00:04:01.432 CC 
lib/ftl/ftl_l2p_flat.o 00:04:01.432 CC lib/nvmf/tcp.o 00:04:01.432 CC lib/nvmf/stubs.o 00:04:01.432 CC lib/scsi/scsi_rpc.o 00:04:01.432 CC lib/nvmf/mdns_server.o 00:04:01.690 CC lib/ftl/ftl_nv_cache.o 00:04:01.690 CC lib/scsi/task.o 00:04:01.690 CC lib/ftl/ftl_band.o 00:04:01.690 CC lib/nvmf/rdma.o 00:04:01.690 CC lib/nvmf/auth.o 00:04:01.690 CC lib/ftl/ftl_band_ops.o 00:04:01.949 LIB libspdk_scsi.a 00:04:01.949 CC lib/ftl/ftl_writer.o 00:04:01.949 SO libspdk_scsi.so.9.0 00:04:01.949 CC lib/ftl/ftl_rq.o 00:04:01.949 CC lib/ftl/ftl_reloc.o 00:04:01.949 SYMLINK libspdk_scsi.so 00:04:01.949 CC lib/ftl/ftl_l2p_cache.o 00:04:02.206 CC lib/ftl/ftl_p2l.o 00:04:02.206 CC lib/ftl/mngt/ftl_mngt.o 00:04:02.206 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:02.206 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:02.464 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:02.464 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:02.464 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:02.721 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:02.721 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:02.721 CC lib/iscsi/conn.o 00:04:02.721 CC lib/vhost/vhost.o 00:04:02.721 CC lib/vhost/vhost_rpc.o 00:04:02.721 CC lib/vhost/vhost_scsi.o 00:04:02.721 CC lib/iscsi/init_grp.o 00:04:02.721 CC lib/iscsi/iscsi.o 00:04:02.979 CC lib/vhost/vhost_blk.o 00:04:02.979 CC lib/vhost/rte_vhost_user.o 00:04:02.979 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:03.237 CC lib/iscsi/md5.o 00:04:03.494 CC lib/iscsi/param.o 00:04:03.494 CC lib/iscsi/portal_grp.o 00:04:03.494 CC lib/iscsi/tgt_node.o 00:04:03.494 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:03.752 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:03.752 LIB libspdk_nvmf.a 00:04:03.752 CC lib/iscsi/iscsi_subsystem.o 00:04:03.752 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:03.752 CC lib/iscsi/iscsi_rpc.o 00:04:03.752 SO libspdk_nvmf.so.18.0 00:04:04.009 CC lib/iscsi/task.o 00:04:04.009 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:04.009 CC lib/ftl/utils/ftl_conf.o 00:04:04.009 SYMLINK libspdk_nvmf.so 00:04:04.009 CC lib/ftl/utils/ftl_md.o 00:04:04.009 CC lib/ftl/utils/ftl_mempool.o 00:04:04.267 CC lib/ftl/utils/ftl_bitmap.o 00:04:04.267 CC lib/ftl/utils/ftl_property.o 00:04:04.267 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:04.267 LIB libspdk_vhost.a 00:04:04.267 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:04.267 LIB libspdk_iscsi.a 00:04:04.267 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:04.267 SO libspdk_vhost.so.8.0 00:04:04.267 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:04.267 SO libspdk_iscsi.so.8.0 00:04:04.525 SYMLINK libspdk_vhost.so 00:04:04.525 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:04.525 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:04.525 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:04.525 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:04.525 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:04.525 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:04.525 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:04.525 SYMLINK libspdk_iscsi.so 00:04:04.525 CC lib/ftl/base/ftl_base_dev.o 00:04:04.525 CC lib/ftl/base/ftl_base_bdev.o 00:04:04.525 CC lib/ftl/ftl_trace.o 00:04:04.782 LIB libspdk_ftl.a 00:04:05.348 SO libspdk_ftl.so.9.0 00:04:05.606 SYMLINK libspdk_ftl.so 00:04:05.864 CC module/env_dpdk/env_dpdk_rpc.o 00:04:05.864 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:06.122 CC module/blob/bdev/blob_bdev.o 00:04:06.122 CC module/accel/ioat/accel_ioat.o 00:04:06.122 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:06.122 CC module/accel/iaa/accel_iaa.o 00:04:06.122 CC module/keyring/file/keyring.o 00:04:06.122 CC module/accel/error/accel_error.o 00:04:06.122 CC module/sock/posix/posix.o 
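The CC and LIB lines in this stretch are the output of the make -j10 started after "Type 'make' to build." above. Stripped of the autotest wrappers, the configure-and-build step recorded a little earlier reduces to roughly the following sketch (flags copied verbatim from this log; whether they suit another host, for example the fio path or RDMA/ublk support, is an assumption):

    cd /home/vagrant/spdk_repo/spdk
    # Configure SPDK against the DPDK staged in ../dpdk/build, with the flags recorded above.
    ./configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-coverage --with-ublk \
        --with-dpdk=/home/vagrant/spdk_repo/dpdk/build \
        --with-avahi --with-golang --with-shared
    # Build with the same parallelism the job used.
    make -j10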
00:04:06.122 CC module/accel/dsa/accel_dsa.o 00:04:06.122 LIB libspdk_env_dpdk_rpc.a 00:04:06.122 LIB libspdk_scheduler_dpdk_governor.a 00:04:06.122 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:06.122 CC module/keyring/file/keyring_rpc.o 00:04:06.122 SO libspdk_env_dpdk_rpc.so.6.0 00:04:06.122 LIB libspdk_scheduler_dynamic.a 00:04:06.122 CC module/accel/ioat/accel_ioat_rpc.o 00:04:06.122 CC module/accel/iaa/accel_iaa_rpc.o 00:04:06.380 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:06.380 CC module/accel/error/accel_error_rpc.o 00:04:06.380 SO libspdk_scheduler_dynamic.so.4.0 00:04:06.380 SYMLINK libspdk_env_dpdk_rpc.so 00:04:06.380 SYMLINK libspdk_scheduler_dynamic.so 00:04:06.380 CC module/accel/dsa/accel_dsa_rpc.o 00:04:06.380 LIB libspdk_blob_bdev.a 00:04:06.380 LIB libspdk_accel_ioat.a 00:04:06.380 SO libspdk_blob_bdev.so.11.0 00:04:06.380 LIB libspdk_accel_iaa.a 00:04:06.380 SO libspdk_accel_ioat.so.6.0 00:04:06.380 LIB libspdk_accel_error.a 00:04:06.380 SYMLINK libspdk_blob_bdev.so 00:04:06.380 LIB libspdk_keyring_file.a 00:04:06.380 SO libspdk_accel_iaa.so.3.0 00:04:06.380 SO libspdk_accel_error.so.2.0 00:04:06.380 CC module/scheduler/gscheduler/gscheduler.o 00:04:06.380 LIB libspdk_accel_dsa.a 00:04:06.380 SYMLINK libspdk_accel_ioat.so 00:04:06.380 CC module/keyring/linux/keyring.o 00:04:06.380 CC module/keyring/linux/keyring_rpc.o 00:04:06.380 SO libspdk_keyring_file.so.1.0 00:04:06.380 SYMLINK libspdk_accel_iaa.so 00:04:06.380 SO libspdk_accel_dsa.so.5.0 00:04:06.638 SYMLINK libspdk_accel_error.so 00:04:06.638 SYMLINK libspdk_keyring_file.so 00:04:06.638 SYMLINK libspdk_accel_dsa.so 00:04:06.638 LIB libspdk_scheduler_gscheduler.a 00:04:06.638 LIB libspdk_keyring_linux.a 00:04:06.638 SO libspdk_scheduler_gscheduler.so.4.0 00:04:06.638 SO libspdk_keyring_linux.so.1.0 00:04:06.638 SYMLINK libspdk_scheduler_gscheduler.so 00:04:06.638 CC module/bdev/delay/vbdev_delay.o 00:04:06.638 CC module/bdev/error/vbdev_error.o 00:04:06.638 CC module/bdev/gpt/gpt.o 00:04:06.896 CC module/bdev/lvol/vbdev_lvol.o 00:04:06.896 CC module/blobfs/bdev/blobfs_bdev.o 00:04:06.896 CC module/bdev/malloc/bdev_malloc.o 00:04:06.896 SYMLINK libspdk_keyring_linux.so 00:04:06.896 CC module/bdev/gpt/vbdev_gpt.o 00:04:06.896 CC module/bdev/null/bdev_null.o 00:04:06.896 CC module/bdev/nvme/bdev_nvme.o 00:04:06.896 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:06.896 LIB libspdk_sock_posix.a 00:04:07.154 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:07.154 SO libspdk_sock_posix.so.6.0 00:04:07.154 CC module/bdev/error/vbdev_error_rpc.o 00:04:07.154 SYMLINK libspdk_sock_posix.so 00:04:07.154 CC module/bdev/null/bdev_null_rpc.o 00:04:07.154 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:07.154 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:07.154 LIB libspdk_bdev_gpt.a 00:04:07.154 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:07.154 SO libspdk_bdev_gpt.so.6.0 00:04:07.411 LIB libspdk_bdev_error.a 00:04:07.411 LIB libspdk_blobfs_bdev.a 00:04:07.411 LIB libspdk_bdev_null.a 00:04:07.411 SO libspdk_bdev_error.so.6.0 00:04:07.411 SYMLINK libspdk_bdev_gpt.so 00:04:07.411 LIB libspdk_bdev_delay.a 00:04:07.411 SO libspdk_blobfs_bdev.so.6.0 00:04:07.411 LIB libspdk_bdev_malloc.a 00:04:07.411 SO libspdk_bdev_null.so.6.0 00:04:07.411 SO libspdk_bdev_delay.so.6.0 00:04:07.411 SYMLINK libspdk_bdev_error.so 00:04:07.411 SO libspdk_bdev_malloc.so.6.0 00:04:07.411 SYMLINK libspdk_bdev_null.so 00:04:07.411 SYMLINK libspdk_blobfs_bdev.so 00:04:07.411 SYMLINK libspdk_bdev_malloc.so 00:04:07.411 SYMLINK libspdk_bdev_delay.so 00:04:07.411 CC 
module/bdev/nvme/nvme_rpc.o 00:04:07.669 CC module/bdev/passthru/vbdev_passthru.o 00:04:07.669 CC module/bdev/raid/bdev_raid.o 00:04:07.669 CC module/bdev/split/vbdev_split.o 00:04:07.669 LIB libspdk_bdev_lvol.a 00:04:07.669 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:07.669 SO libspdk_bdev_lvol.so.6.0 00:04:07.669 CC module/bdev/aio/bdev_aio.o 00:04:07.669 CC module/bdev/ftl/bdev_ftl.o 00:04:07.669 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:07.669 CC module/bdev/split/vbdev_split_rpc.o 00:04:07.669 SYMLINK libspdk_bdev_lvol.so 00:04:07.669 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:07.927 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:07.927 CC module/bdev/raid/bdev_raid_rpc.o 00:04:07.927 LIB libspdk_bdev_passthru.a 00:04:07.927 SO libspdk_bdev_passthru.so.6.0 00:04:07.927 CC module/bdev/nvme/bdev_mdns_client.o 00:04:07.927 LIB libspdk_bdev_split.a 00:04:08.184 SYMLINK libspdk_bdev_passthru.so 00:04:08.185 CC module/bdev/aio/bdev_aio_rpc.o 00:04:08.185 LIB libspdk_bdev_ftl.a 00:04:08.185 SO libspdk_bdev_split.so.6.0 00:04:08.185 CC module/bdev/iscsi/bdev_iscsi.o 00:04:08.185 SO libspdk_bdev_ftl.so.6.0 00:04:08.185 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:08.185 CC module/bdev/nvme/vbdev_opal.o 00:04:08.185 LIB libspdk_bdev_zone_block.a 00:04:08.185 SYMLINK libspdk_bdev_ftl.so 00:04:08.185 SYMLINK libspdk_bdev_split.so 00:04:08.185 CC module/bdev/raid/bdev_raid_sb.o 00:04:08.185 SO libspdk_bdev_zone_block.so.6.0 00:04:08.185 LIB libspdk_bdev_aio.a 00:04:08.185 SO libspdk_bdev_aio.so.6.0 00:04:08.185 SYMLINK libspdk_bdev_zone_block.so 00:04:08.443 CC module/bdev/raid/raid0.o 00:04:08.443 SYMLINK libspdk_bdev_aio.so 00:04:08.443 CC module/bdev/raid/raid1.o 00:04:08.443 CC module/bdev/raid/concat.o 00:04:08.443 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:08.443 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:08.443 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:08.443 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:08.443 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:08.701 LIB libspdk_bdev_iscsi.a 00:04:08.701 LIB libspdk_bdev_raid.a 00:04:08.701 SO libspdk_bdev_iscsi.so.6.0 00:04:08.701 SO libspdk_bdev_raid.so.6.0 00:04:08.701 SYMLINK libspdk_bdev_iscsi.so 00:04:08.701 SYMLINK libspdk_bdev_raid.so 00:04:08.959 LIB libspdk_bdev_virtio.a 00:04:08.959 SO libspdk_bdev_virtio.so.6.0 00:04:08.959 SYMLINK libspdk_bdev_virtio.so 00:04:09.218 LIB libspdk_bdev_nvme.a 00:04:09.218 SO libspdk_bdev_nvme.so.7.0 00:04:09.477 SYMLINK libspdk_bdev_nvme.so 00:04:10.084 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:10.084 CC module/event/subsystems/iobuf/iobuf.o 00:04:10.084 CC module/event/subsystems/keyring/keyring.o 00:04:10.084 CC module/event/subsystems/scheduler/scheduler.o 00:04:10.084 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:10.084 CC module/event/subsystems/vmd/vmd.o 00:04:10.084 CC module/event/subsystems/sock/sock.o 00:04:10.084 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:10.084 LIB libspdk_event_keyring.a 00:04:10.084 LIB libspdk_event_vhost_blk.a 00:04:10.084 LIB libspdk_event_iobuf.a 00:04:10.084 LIB libspdk_event_sock.a 00:04:10.084 SO libspdk_event_keyring.so.1.0 00:04:10.084 SO libspdk_event_vhost_blk.so.3.0 00:04:10.084 SO libspdk_event_iobuf.so.3.0 00:04:10.084 LIB libspdk_event_scheduler.a 00:04:10.084 SO libspdk_event_sock.so.5.0 00:04:10.084 LIB libspdk_event_vmd.a 00:04:10.084 SYMLINK libspdk_event_keyring.so 00:04:10.084 SYMLINK libspdk_event_vhost_blk.so 00:04:10.084 SO libspdk_event_scheduler.so.4.0 00:04:10.084 SO libspdk_event_vmd.so.6.0 
00:04:10.084 SYMLINK libspdk_event_iobuf.so 00:04:10.084 SYMLINK libspdk_event_sock.so 00:04:10.345 SYMLINK libspdk_event_scheduler.so 00:04:10.345 SYMLINK libspdk_event_vmd.so 00:04:10.345 CC module/event/subsystems/accel/accel.o 00:04:10.603 LIB libspdk_event_accel.a 00:04:10.603 SO libspdk_event_accel.so.6.0 00:04:10.603 SYMLINK libspdk_event_accel.so 00:04:10.861 CC module/event/subsystems/bdev/bdev.o 00:04:11.120 LIB libspdk_event_bdev.a 00:04:11.120 SO libspdk_event_bdev.so.6.0 00:04:11.120 SYMLINK libspdk_event_bdev.so 00:04:11.379 CC module/event/subsystems/scsi/scsi.o 00:04:11.379 CC module/event/subsystems/nbd/nbd.o 00:04:11.379 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:11.379 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:11.379 CC module/event/subsystems/ublk/ublk.o 00:04:11.637 LIB libspdk_event_nbd.a 00:04:11.637 LIB libspdk_event_ublk.a 00:04:11.637 SO libspdk_event_nbd.so.6.0 00:04:11.637 LIB libspdk_event_scsi.a 00:04:11.637 SO libspdk_event_ublk.so.3.0 00:04:11.637 SO libspdk_event_scsi.so.6.0 00:04:11.637 SYMLINK libspdk_event_nbd.so 00:04:11.637 SYMLINK libspdk_event_scsi.so 00:04:11.637 SYMLINK libspdk_event_ublk.so 00:04:11.637 LIB libspdk_event_nvmf.a 00:04:11.896 SO libspdk_event_nvmf.so.6.0 00:04:11.896 SYMLINK libspdk_event_nvmf.so 00:04:11.896 CC module/event/subsystems/iscsi/iscsi.o 00:04:11.896 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:12.154 LIB libspdk_event_vhost_scsi.a 00:04:12.154 SO libspdk_event_vhost_scsi.so.3.0 00:04:12.154 LIB libspdk_event_iscsi.a 00:04:12.154 SO libspdk_event_iscsi.so.6.0 00:04:12.154 SYMLINK libspdk_event_vhost_scsi.so 00:04:12.154 SYMLINK libspdk_event_iscsi.so 00:04:12.412 SO libspdk.so.6.0 00:04:12.412 SYMLINK libspdk.so 00:04:12.670 CXX app/trace/trace.o 00:04:12.671 CC app/trace_record/trace_record.o 00:04:12.671 CC app/iscsi_tgt/iscsi_tgt.o 00:04:12.671 CC app/nvmf_tgt/nvmf_main.o 00:04:12.671 CC app/spdk_tgt/spdk_tgt.o 00:04:12.671 CC examples/ioat/perf/perf.o 00:04:12.929 CC examples/accel/perf/accel_perf.o 00:04:12.929 CC examples/bdev/hello_world/hello_bdev.o 00:04:12.929 CC examples/blob/hello_world/hello_blob.o 00:04:12.929 CC test/accel/dif/dif.o 00:04:12.929 LINK iscsi_tgt 00:04:12.929 LINK spdk_tgt 00:04:12.929 LINK ioat_perf 00:04:13.187 LINK nvmf_tgt 00:04:13.187 LINK spdk_trace_record 00:04:13.187 LINK hello_blob 00:04:13.187 LINK hello_bdev 00:04:13.187 LINK spdk_trace 00:04:13.187 CC examples/ioat/verify/verify.o 00:04:13.447 LINK accel_perf 00:04:13.447 CC app/spdk_lspci/spdk_lspci.o 00:04:13.447 CC app/spdk_nvme_perf/perf.o 00:04:13.447 LINK dif 00:04:13.447 CC examples/bdev/bdevperf/bdevperf.o 00:04:13.447 CC app/spdk_nvme_identify/identify.o 00:04:13.447 LINK spdk_lspci 00:04:13.447 CC examples/blob/cli/blobcli.o 00:04:13.447 LINK verify 00:04:13.705 CC test/app/bdev_svc/bdev_svc.o 00:04:13.705 CC test/bdev/bdevio/bdevio.o 00:04:13.705 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:13.705 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:13.705 LINK bdev_svc 00:04:13.964 CC examples/nvme/hello_world/hello_world.o 00:04:13.964 CC examples/sock/hello_world/hello_sock.o 00:04:13.964 LINK blobcli 00:04:13.964 LINK bdevio 00:04:14.222 LINK nvme_fuzz 00:04:14.222 LINK hello_world 00:04:14.222 CC app/spdk_nvme_discover/discovery_aer.o 00:04:14.222 LINK spdk_nvme_perf 00:04:14.222 LINK bdevperf 00:04:14.222 LINK hello_sock 00:04:14.222 LINK spdk_nvme_identify 00:04:14.481 LINK spdk_nvme_discover 00:04:14.481 CC app/spdk_top/spdk_top.o 00:04:14.481 CC examples/nvme/reconnect/reconnect.o 
00:04:14.481 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:14.481 CC app/vhost/vhost.o 00:04:14.481 CC examples/nvme/arbitration/arbitration.o 00:04:14.481 TEST_HEADER include/spdk/accel.h 00:04:14.481 TEST_HEADER include/spdk/accel_module.h 00:04:14.481 TEST_HEADER include/spdk/assert.h 00:04:14.481 TEST_HEADER include/spdk/barrier.h 00:04:14.481 TEST_HEADER include/spdk/base64.h 00:04:14.481 TEST_HEADER include/spdk/bdev.h 00:04:14.481 TEST_HEADER include/spdk/bdev_module.h 00:04:14.481 TEST_HEADER include/spdk/bdev_zone.h 00:04:14.481 TEST_HEADER include/spdk/bit_array.h 00:04:14.481 TEST_HEADER include/spdk/bit_pool.h 00:04:14.481 TEST_HEADER include/spdk/blob_bdev.h 00:04:14.481 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:14.481 TEST_HEADER include/spdk/blobfs.h 00:04:14.481 TEST_HEADER include/spdk/blob.h 00:04:14.481 TEST_HEADER include/spdk/conf.h 00:04:14.481 TEST_HEADER include/spdk/config.h 00:04:14.481 TEST_HEADER include/spdk/cpuset.h 00:04:14.739 TEST_HEADER include/spdk/crc16.h 00:04:14.739 TEST_HEADER include/spdk/crc32.h 00:04:14.739 TEST_HEADER include/spdk/crc64.h 00:04:14.739 TEST_HEADER include/spdk/dif.h 00:04:14.739 TEST_HEADER include/spdk/dma.h 00:04:14.739 TEST_HEADER include/spdk/endian.h 00:04:14.739 TEST_HEADER include/spdk/env_dpdk.h 00:04:14.739 TEST_HEADER include/spdk/env.h 00:04:14.739 TEST_HEADER include/spdk/event.h 00:04:14.739 TEST_HEADER include/spdk/fd_group.h 00:04:14.739 TEST_HEADER include/spdk/fd.h 00:04:14.739 TEST_HEADER include/spdk/file.h 00:04:14.739 TEST_HEADER include/spdk/ftl.h 00:04:14.739 TEST_HEADER include/spdk/gpt_spec.h 00:04:14.739 TEST_HEADER include/spdk/hexlify.h 00:04:14.739 TEST_HEADER include/spdk/histogram_data.h 00:04:14.739 TEST_HEADER include/spdk/idxd.h 00:04:14.739 TEST_HEADER include/spdk/idxd_spec.h 00:04:14.739 TEST_HEADER include/spdk/init.h 00:04:14.739 TEST_HEADER include/spdk/ioat.h 00:04:14.739 TEST_HEADER include/spdk/ioat_spec.h 00:04:14.739 TEST_HEADER include/spdk/iscsi_spec.h 00:04:14.739 TEST_HEADER include/spdk/json.h 00:04:14.739 TEST_HEADER include/spdk/jsonrpc.h 00:04:14.739 CC test/blobfs/mkfs/mkfs.o 00:04:14.739 TEST_HEADER include/spdk/keyring.h 00:04:14.739 TEST_HEADER include/spdk/keyring_module.h 00:04:14.739 TEST_HEADER include/spdk/likely.h 00:04:14.739 TEST_HEADER include/spdk/log.h 00:04:14.739 TEST_HEADER include/spdk/lvol.h 00:04:14.739 TEST_HEADER include/spdk/memory.h 00:04:14.739 TEST_HEADER include/spdk/mmio.h 00:04:14.739 TEST_HEADER include/spdk/nbd.h 00:04:14.739 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:14.739 TEST_HEADER include/spdk/notify.h 00:04:14.739 TEST_HEADER include/spdk/nvme.h 00:04:14.739 TEST_HEADER include/spdk/nvme_intel.h 00:04:14.739 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:14.739 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:14.739 TEST_HEADER include/spdk/nvme_spec.h 00:04:14.739 TEST_HEADER include/spdk/nvme_zns.h 00:04:14.739 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:14.739 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:14.739 TEST_HEADER include/spdk/nvmf.h 00:04:14.739 TEST_HEADER include/spdk/nvmf_spec.h 00:04:14.739 TEST_HEADER include/spdk/nvmf_transport.h 00:04:14.739 TEST_HEADER include/spdk/opal.h 00:04:14.739 TEST_HEADER include/spdk/opal_spec.h 00:04:14.739 TEST_HEADER include/spdk/pci_ids.h 00:04:14.739 TEST_HEADER include/spdk/pipe.h 00:04:14.739 TEST_HEADER include/spdk/queue.h 00:04:14.739 LINK vhost 00:04:14.739 TEST_HEADER include/spdk/reduce.h 00:04:14.739 TEST_HEADER include/spdk/rpc.h 00:04:14.739 TEST_HEADER 
include/spdk/scheduler.h 00:04:14.739 TEST_HEADER include/spdk/scsi.h 00:04:14.739 TEST_HEADER include/spdk/scsi_spec.h 00:04:14.739 TEST_HEADER include/spdk/sock.h 00:04:14.739 TEST_HEADER include/spdk/stdinc.h 00:04:14.739 TEST_HEADER include/spdk/string.h 00:04:14.739 TEST_HEADER include/spdk/thread.h 00:04:14.739 TEST_HEADER include/spdk/trace.h 00:04:14.739 TEST_HEADER include/spdk/trace_parser.h 00:04:14.739 TEST_HEADER include/spdk/tree.h 00:04:14.739 TEST_HEADER include/spdk/ublk.h 00:04:14.739 TEST_HEADER include/spdk/util.h 00:04:14.739 TEST_HEADER include/spdk/uuid.h 00:04:14.739 TEST_HEADER include/spdk/version.h 00:04:14.739 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:14.739 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:14.739 TEST_HEADER include/spdk/vhost.h 00:04:14.739 TEST_HEADER include/spdk/vmd.h 00:04:14.739 TEST_HEADER include/spdk/xor.h 00:04:14.739 TEST_HEADER include/spdk/zipf.h 00:04:14.739 CXX test/cpp_headers/accel.o 00:04:14.739 CC test/dma/test_dma/test_dma.o 00:04:14.998 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:14.998 LINK mkfs 00:04:14.998 LINK arbitration 00:04:14.998 LINK reconnect 00:04:14.998 LINK nvme_manage 00:04:14.998 CXX test/cpp_headers/accel_module.o 00:04:14.998 CXX test/cpp_headers/assert.o 00:04:15.256 CC app/spdk_dd/spdk_dd.o 00:04:15.256 LINK test_dma 00:04:15.256 LINK spdk_top 00:04:15.256 CC app/fio/nvme/fio_plugin.o 00:04:15.256 CC examples/nvme/hotplug/hotplug.o 00:04:15.256 LINK vhost_fuzz 00:04:15.256 CXX test/cpp_headers/barrier.o 00:04:15.256 CC examples/vmd/lsvmd/lsvmd.o 00:04:15.515 CXX test/cpp_headers/base64.o 00:04:15.515 CC examples/nvmf/nvmf/nvmf.o 00:04:15.515 LINK iscsi_fuzz 00:04:15.515 CXX test/cpp_headers/bdev.o 00:04:15.515 CXX test/cpp_headers/bdev_module.o 00:04:15.515 LINK lsvmd 00:04:15.515 LINK hotplug 00:04:15.515 LINK spdk_dd 00:04:15.773 CXX test/cpp_headers/bdev_zone.o 00:04:15.773 CXX test/cpp_headers/bit_array.o 00:04:15.773 LINK nvmf 00:04:15.773 CC examples/util/zipf/zipf.o 00:04:15.773 CC examples/vmd/led/led.o 00:04:15.773 LINK spdk_nvme 00:04:15.773 CC examples/thread/thread/thread_ex.o 00:04:16.031 CC examples/idxd/perf/perf.o 00:04:16.031 CC test/app/histogram_perf/histogram_perf.o 00:04:16.031 CXX test/cpp_headers/bit_pool.o 00:04:16.031 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:16.031 LINK zipf 00:04:16.031 LINK led 00:04:16.031 CC examples/nvme/abort/abort.o 00:04:16.031 CC app/fio/bdev/fio_plugin.o 00:04:16.031 CXX test/cpp_headers/blob_bdev.o 00:04:16.031 CXX test/cpp_headers/blobfs_bdev.o 00:04:16.031 LINK histogram_perf 00:04:16.289 CXX test/cpp_headers/blobfs.o 00:04:16.289 LINK cmb_copy 00:04:16.289 LINK thread 00:04:16.289 CC test/app/jsoncat/jsoncat.o 00:04:16.289 LINK idxd_perf 00:04:16.289 CXX test/cpp_headers/blob.o 00:04:16.289 LINK jsoncat 00:04:16.289 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:16.575 CXX test/cpp_headers/conf.o 00:04:16.575 LINK abort 00:04:16.575 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:16.575 CXX test/cpp_headers/config.o 00:04:16.575 CXX test/cpp_headers/cpuset.o 00:04:16.575 LINK pmr_persistence 00:04:16.575 CC test/event/event_perf/event_perf.o 00:04:16.575 LINK spdk_bdev 00:04:16.575 CC test/env/vtophys/vtophys.o 00:04:16.575 CC test/env/mem_callbacks/mem_callbacks.o 00:04:16.575 CC test/app/stub/stub.o 00:04:16.846 LINK interrupt_tgt 00:04:16.846 LINK event_perf 00:04:16.846 CXX test/cpp_headers/crc16.o 00:04:16.846 LINK vtophys 00:04:16.846 LINK stub 00:04:16.846 LINK mem_callbacks 00:04:16.846 CC 
test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:16.846 CC test/lvol/esnap/esnap.o 00:04:16.846 CC test/nvme/aer/aer.o 00:04:17.103 CXX test/cpp_headers/crc32.o 00:04:17.103 CC test/event/reactor_perf/reactor_perf.o 00:04:17.103 CC test/event/reactor/reactor.o 00:04:17.103 LINK env_dpdk_post_init 00:04:17.103 CC test/event/app_repeat/app_repeat.o 00:04:17.103 CC test/env/memory/memory_ut.o 00:04:17.103 CXX test/cpp_headers/crc64.o 00:04:17.103 CC test/event/scheduler/scheduler.o 00:04:17.103 LINK reactor_perf 00:04:17.103 LINK reactor 00:04:17.361 LINK aer 00:04:17.361 LINK app_repeat 00:04:17.361 CXX test/cpp_headers/dif.o 00:04:17.361 CC test/env/pci/pci_ut.o 00:04:17.361 CXX test/cpp_headers/dma.o 00:04:17.361 LINK scheduler 00:04:17.361 CC test/nvme/reset/reset.o 00:04:17.618 CC test/nvme/sgl/sgl.o 00:04:17.618 CC test/rpc_client/rpc_client_test.o 00:04:17.618 CXX test/cpp_headers/endian.o 00:04:17.618 CXX test/cpp_headers/env_dpdk.o 00:04:17.876 CC test/thread/poller_perf/poller_perf.o 00:04:17.876 LINK rpc_client_test 00:04:17.876 LINK pci_ut 00:04:17.876 LINK reset 00:04:17.876 CC test/nvme/e2edp/nvme_dp.o 00:04:17.877 LINK sgl 00:04:17.877 CXX test/cpp_headers/env.o 00:04:17.877 LINK poller_perf 00:04:17.877 LINK memory_ut 00:04:18.134 CXX test/cpp_headers/event.o 00:04:18.134 CC test/nvme/overhead/overhead.o 00:04:18.134 CXX test/cpp_headers/fd_group.o 00:04:18.134 CXX test/cpp_headers/fd.o 00:04:18.134 LINK nvme_dp 00:04:18.134 CC test/nvme/err_injection/err_injection.o 00:04:18.134 CC test/nvme/startup/startup.o 00:04:18.392 CC test/nvme/reserve/reserve.o 00:04:18.392 CC test/nvme/simple_copy/simple_copy.o 00:04:18.392 CXX test/cpp_headers/file.o 00:04:18.392 LINK overhead 00:04:18.392 LINK err_injection 00:04:18.392 LINK startup 00:04:18.392 CC test/nvme/connect_stress/connect_stress.o 00:04:18.652 LINK reserve 00:04:18.652 CC test/nvme/boot_partition/boot_partition.o 00:04:18.652 LINK simple_copy 00:04:18.652 CXX test/cpp_headers/ftl.o 00:04:18.652 LINK connect_stress 00:04:18.652 CC test/nvme/compliance/nvme_compliance.o 00:04:18.652 LINK boot_partition 00:04:18.652 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:18.652 CC test/nvme/fused_ordering/fused_ordering.o 00:04:18.911 CXX test/cpp_headers/gpt_spec.o 00:04:18.911 CC test/nvme/fdp/fdp.o 00:04:18.911 CC test/nvme/cuse/cuse.o 00:04:18.911 CXX test/cpp_headers/hexlify.o 00:04:18.911 CXX test/cpp_headers/histogram_data.o 00:04:18.911 LINK doorbell_aers 00:04:18.911 LINK fused_ordering 00:04:18.911 CXX test/cpp_headers/idxd.o 00:04:19.169 CXX test/cpp_headers/idxd_spec.o 00:04:19.169 CXX test/cpp_headers/init.o 00:04:19.169 LINK nvme_compliance 00:04:19.169 LINK fdp 00:04:19.169 CXX test/cpp_headers/ioat.o 00:04:19.169 CXX test/cpp_headers/ioat_spec.o 00:04:19.169 CXX test/cpp_headers/iscsi_spec.o 00:04:19.427 CXX test/cpp_headers/json.o 00:04:19.427 CXX test/cpp_headers/jsonrpc.o 00:04:19.427 CXX test/cpp_headers/keyring.o 00:04:19.427 CXX test/cpp_headers/keyring_module.o 00:04:19.686 CXX test/cpp_headers/likely.o 00:04:19.686 CXX test/cpp_headers/log.o 00:04:19.686 CXX test/cpp_headers/lvol.o 00:04:19.686 CXX test/cpp_headers/memory.o 00:04:19.686 CXX test/cpp_headers/mmio.o 00:04:19.686 CXX test/cpp_headers/nbd.o 00:04:19.686 CXX test/cpp_headers/notify.o 00:04:19.686 CXX test/cpp_headers/nvme.o 00:04:19.686 CXX test/cpp_headers/nvme_intel.o 00:04:19.686 CXX test/cpp_headers/nvme_ocssd.o 00:04:19.944 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:19.944 CXX test/cpp_headers/nvme_spec.o 00:04:19.944 CXX 
test/cpp_headers/nvme_zns.o 00:04:19.944 CXX test/cpp_headers/nvmf_cmd.o 00:04:19.944 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:19.944 CXX test/cpp_headers/nvmf.o 00:04:19.944 CXX test/cpp_headers/nvmf_spec.o 00:04:19.944 CXX test/cpp_headers/nvmf_transport.o 00:04:19.944 CXX test/cpp_headers/opal.o 00:04:19.944 CXX test/cpp_headers/opal_spec.o 00:04:20.203 CXX test/cpp_headers/pci_ids.o 00:04:20.203 CXX test/cpp_headers/pipe.o 00:04:20.203 CXX test/cpp_headers/queue.o 00:04:20.203 CXX test/cpp_headers/reduce.o 00:04:20.203 CXX test/cpp_headers/rpc.o 00:04:20.203 CXX test/cpp_headers/scheduler.o 00:04:20.203 CXX test/cpp_headers/scsi.o 00:04:20.203 CXX test/cpp_headers/scsi_spec.o 00:04:20.203 CXX test/cpp_headers/sock.o 00:04:20.203 CXX test/cpp_headers/stdinc.o 00:04:20.462 LINK cuse 00:04:20.462 CXX test/cpp_headers/string.o 00:04:20.462 CXX test/cpp_headers/thread.o 00:04:20.462 CXX test/cpp_headers/trace.o 00:04:20.719 CXX test/cpp_headers/trace_parser.o 00:04:20.719 CXX test/cpp_headers/tree.o 00:04:20.719 CXX test/cpp_headers/ublk.o 00:04:20.719 CXX test/cpp_headers/util.o 00:04:20.719 CXX test/cpp_headers/uuid.o 00:04:20.719 CXX test/cpp_headers/version.o 00:04:20.719 CXX test/cpp_headers/vfio_user_pci.o 00:04:20.719 CXX test/cpp_headers/vfio_user_spec.o 00:04:20.977 CXX test/cpp_headers/vhost.o 00:04:20.977 CXX test/cpp_headers/xor.o 00:04:20.977 CXX test/cpp_headers/vmd.o 00:04:20.977 CXX test/cpp_headers/zipf.o 00:04:22.350 LINK esnap 00:04:23.737 00:04:23.737 real 1m0.092s 00:04:23.737 user 5m39.750s 00:04:23.737 sys 1m14.335s 00:04:23.737 13:03:20 make -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:04:23.737 13:03:20 make -- common/autotest_common.sh@10 -- $ set +x 00:04:23.737 ************************************ 00:04:23.737 END TEST make 00:04:23.737 ************************************ 00:04:23.737 13:03:20 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:23.737 13:03:20 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:23.737 13:03:20 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:23.737 13:03:20 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:23.737 13:03:20 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:23.737 13:03:20 -- pm/common@44 -- $ pid=6048 00:04:23.737 13:03:20 -- pm/common@50 -- $ kill -TERM 6048 00:04:23.737 13:03:20 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:23.737 13:03:20 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:23.737 13:03:20 -- pm/common@44 -- $ pid=6050 00:04:23.737 13:03:20 -- pm/common@50 -- $ kill -TERM 6050 00:04:23.737 13:03:20 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:23.737 13:03:20 -- nvmf/common.sh@7 -- # uname -s 00:04:23.737 13:03:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:23.737 13:03:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:23.737 13:03:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:23.737 13:03:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:23.737 13:03:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:23.737 13:03:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:23.737 13:03:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:23.737 13:03:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:23.737 13:03:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:23.737 13:03:20 -- nvmf/common.sh@17 -- # nvme 
gen-hostnqn 00:04:23.737 13:03:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:04:23.737 13:03:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=c8b8b44b-387e-43b9-a950-dc0d98528a02 00:04:23.737 13:03:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:23.737 13:03:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:23.737 13:03:20 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:04:23.737 13:03:20 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:23.738 13:03:20 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:23.738 13:03:20 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:23.738 13:03:20 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:23.738 13:03:20 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:23.738 13:03:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:23.738 13:03:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:23.738 13:03:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:23.738 13:03:20 -- paths/export.sh@5 -- # export PATH 00:04:23.738 13:03:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:23.738 13:03:20 -- nvmf/common.sh@47 -- # : 0 00:04:23.738 13:03:20 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:23.738 13:03:20 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:23.738 13:03:20 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:23.738 13:03:20 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:23.738 13:03:20 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:23.738 13:03:20 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:23.738 13:03:20 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:23.738 13:03:20 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:23.738 13:03:20 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:23.738 13:03:20 -- spdk/autotest.sh@32 -- # uname -s 00:04:23.738 13:03:20 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:23.738 13:03:20 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:23.738 13:03:20 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:23.738 13:03:20 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:23.738 13:03:20 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:23.738 
13:03:20 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:23.738 13:03:20 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:23.738 13:03:20 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:23.738 13:03:20 -- spdk/autotest.sh@48 -- # udevadm_pid=66786 00:04:23.738 13:03:20 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:23.738 13:03:20 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:23.738 13:03:20 -- pm/common@17 -- # local monitor 00:04:23.738 13:03:20 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:23.738 13:03:20 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:23.738 13:03:20 -- pm/common@25 -- # sleep 1 00:04:23.738 13:03:20 -- pm/common@21 -- # date +%s 00:04:23.738 13:03:20 -- pm/common@21 -- # date +%s 00:04:23.738 13:03:20 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721048600 00:04:23.738 13:03:20 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721048600 00:04:23.738 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721048600_collect-vmstat.pm.log 00:04:23.738 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721048600_collect-cpu-load.pm.log 00:04:24.671 13:03:21 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:24.671 13:03:21 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:24.671 13:03:21 -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:24.671 13:03:21 -- common/autotest_common.sh@10 -- # set +x 00:04:24.671 13:03:21 -- spdk/autotest.sh@59 -- # create_test_list 00:04:24.671 13:03:21 -- common/autotest_common.sh@744 -- # xtrace_disable 00:04:24.671 13:03:21 -- common/autotest_common.sh@10 -- # set +x 00:04:24.671 13:03:21 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:24.671 13:03:21 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:24.671 13:03:21 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:24.671 13:03:21 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:24.671 13:03:21 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:24.671 13:03:21 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:24.671 13:03:21 -- common/autotest_common.sh@1451 -- # uname 00:04:24.671 13:03:21 -- common/autotest_common.sh@1451 -- # '[' Linux = FreeBSD ']' 00:04:24.671 13:03:21 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:24.671 13:03:21 -- common/autotest_common.sh@1471 -- # uname 00:04:24.671 13:03:21 -- common/autotest_common.sh@1471 -- # [[ Linux = FreeBSD ]] 00:04:24.929 13:03:21 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:04:24.929 13:03:21 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:04:24.929 13:03:21 -- spdk/autotest.sh@72 -- # hash lcov 00:04:24.929 13:03:21 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:24.929 13:03:21 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:04:24.929 --rc lcov_branch_coverage=1 00:04:24.929 --rc lcov_function_coverage=1 00:04:24.929 --rc genhtml_branch_coverage=1 00:04:24.929 --rc genhtml_function_coverage=1 00:04:24.929 --rc genhtml_legend=1 00:04:24.929 --rc geninfo_all_blocks=1 00:04:24.929 ' 00:04:24.929 13:03:21 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 
00:04:24.929 --rc lcov_branch_coverage=1 00:04:24.929 --rc lcov_function_coverage=1 00:04:24.929 --rc genhtml_branch_coverage=1 00:04:24.929 --rc genhtml_function_coverage=1 00:04:24.929 --rc genhtml_legend=1 00:04:24.929 --rc geninfo_all_blocks=1 00:04:24.929 ' 00:04:24.929 13:03:21 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:04:24.929 --rc lcov_branch_coverage=1 00:04:24.929 --rc lcov_function_coverage=1 00:04:24.929 --rc genhtml_branch_coverage=1 00:04:24.929 --rc genhtml_function_coverage=1 00:04:24.929 --rc genhtml_legend=1 00:04:24.929 --rc geninfo_all_blocks=1 00:04:24.929 --no-external' 00:04:24.929 13:03:21 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:04:24.929 --rc lcov_branch_coverage=1 00:04:24.929 --rc lcov_function_coverage=1 00:04:24.929 --rc genhtml_branch_coverage=1 00:04:24.929 --rc genhtml_function_coverage=1 00:04:24.929 --rc genhtml_legend=1 00:04:24.929 --rc geninfo_all_blocks=1 00:04:24.929 --no-external' 00:04:24.929 13:03:21 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:24.929 lcov: LCOV version 1.14 00:04:24.929 13:03:21 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:39.795 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:39.795 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:51.988 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:51.988 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:04:51.988 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:51.988 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:04:51.988 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:51.988 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:04:51.988 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:51.988 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:04:51.988 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:51.988 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:04:51.988 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:51.988 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:04:51.988 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:51.988 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:04:51.988 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:51.988 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:04:51.988 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:51.988 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:04:51.988 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:51.988 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:04:51.988 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:51.988 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:04:51.988 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:51.988 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:51.988 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:51.988 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:04:51.988 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:51.988 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:04:51.988 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:04:51.988 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:04:51.988 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:04:51.988 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:04:51.988 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:51.988 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:04:51.988 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:51.989 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:04:51.989 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:51.989 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:04:51.989 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:51.989 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:04:51.989 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:51.989 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:04:51.989 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:51.989 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:04:51.989 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:51.989 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:04:51.989 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:51.989 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:04:51.989 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:04:51.989 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:04:51.989 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:04:51.989 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:04:51.989 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:51.989 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:04:51.989 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:51.989 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:04:51.989 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:04:51.989 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:04:51.989 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:04:51.989 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:04:51.989 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:51.989 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:04:51.989 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:51.989 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:04:51.989 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:04:51.989 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:04:51.989 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:04:51.989 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:04:51.989 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:51.989 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:04:51.989 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:04:51.989 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:04:51.989 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:51.989 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:04:51.989 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:51.989 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:51.989 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:51.989 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:04:51.989 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:04:51.989 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:04:51.989 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:51.989 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:04:51.989 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no 
functions found 00:04:51.989 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:04:51.989 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:04:51.989 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:04:51.989 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:51.989 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:04:51.989 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:04:51.989 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:04:51.989 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:51.989 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:04:51.989 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:04:51.989 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:04:51.989 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:51.989 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:04:51.989 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:51.989 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:04:51.989 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:51.989 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:04:51.989 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:51.989 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:04:51.989 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:04:51.989 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:04:51.989 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:51.989 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:51.989 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:51.989 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:04:51.989 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:04:51.989 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:51.989 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:04:51.989 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:04:51.989 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:04:51.989 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:51.989 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:51.989 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:51.989 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:04:51.989 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:04:51.989 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:51.989 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:51.989 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:51.989 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:51.989 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:04:51.989 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:04:51.989 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:04:51.989 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:04:51.989 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:51.989 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:04:51.989 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:04:51.989 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:04:51.989 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:04:51.989 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:04:51.989 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:51.989 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:04:51.989 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:04:51.989 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:04:51.989 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:04:51.989 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:04:51.989 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:04:51.989 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:04:51.989 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:04:51.989 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:04:51.989 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:51.989 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:04:51.989 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:51.989 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:04:51.989 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:04:51.989 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:04:51.989 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:51.989 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:04:51.989 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:51.989 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:04:51.989 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:51.989 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:04:51.989 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:51.989 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:04:51.989 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:04:51.989 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:04:51.989 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:51.989 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:04:51.989 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:51.990 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:04:51.990 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:04:51.990 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:04:51.990 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:51.990 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:51.990 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:51.990 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:51.990 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:51.990 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:04:51.990 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:51.990 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:04:51.990 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:51.990 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:04:51.990 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:51.990 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:04:56.172 13:03:52 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:04:56.172 13:03:52 -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:56.172 13:03:52 -- common/autotest_common.sh@10 -- # set +x 00:04:56.172 13:03:52 -- spdk/autotest.sh@91 -- # rm -f 00:04:56.172 13:03:52 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:56.430 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:56.430 0000:00:11.0 (1b36 0010): Already using the nvme driver 
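The pre-cleanup that follows (get_zoned_devs in autotest_common.sh) walks /sys/block/nvme* and skips any namespace whose queue/zoned attribute is not "none". A standalone sketch of that scan, assuming only the standard sysfs layout (the array and variable names here are illustrative):

    #!/usr/bin/env bash
    # List NVMe block devices and flag the zoned ones, mirroring get_zoned_devs.
    declare -A zoned_devs=()
    for nvme in /sys/block/nvme*; do
        dev=${nvme##*/}
        [[ -e $nvme/queue/zoned ]] || continue      # attribute absent: treat as non-zoned
        if [[ $(<"$nvme/queue/zoned") != none ]]; then
            zoned_devs[$dev]=1                      # excluded from the generic block tests
        fi
    done
    if (( ${#zoned_devs[@]} )); then
        echo "zoned devices: ${!zoned_devs[*]}"
    else
        echo "no zoned devices found"
    fi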
00:04:56.430 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:56.430 13:03:53 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:04:56.430 13:03:53 -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:04:56.430 13:03:53 -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:04:56.430 13:03:53 -- common/autotest_common.sh@1666 -- # local nvme bdf 00:04:56.430 13:03:53 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:56.430 13:03:53 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:04:56.430 13:03:53 -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:04:56.430 13:03:53 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:56.430 13:03:53 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:56.430 13:03:53 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:56.430 13:03:53 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n1 00:04:56.430 13:03:53 -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:04:56.430 13:03:53 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:56.430 13:03:53 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:56.430 13:03:53 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:56.430 13:03:53 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n2 00:04:56.430 13:03:53 -- common/autotest_common.sh@1658 -- # local device=nvme1n2 00:04:56.430 13:03:53 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:56.430 13:03:53 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:56.430 13:03:53 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:56.430 13:03:53 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n3 00:04:56.430 13:03:53 -- common/autotest_common.sh@1658 -- # local device=nvme1n3 00:04:56.430 13:03:53 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:56.430 13:03:53 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:56.430 13:03:53 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:56.430 13:03:53 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:56.430 13:03:53 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:56.430 13:03:53 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:04:56.430 13:03:53 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:56.430 13:03:53 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:56.430 No valid GPT data, bailing 00:04:56.430 13:03:53 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:56.430 13:03:53 -- scripts/common.sh@391 -- # pt= 00:04:56.430 13:03:53 -- scripts/common.sh@392 -- # return 1 00:04:56.430 13:03:53 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:56.430 1+0 records in 00:04:56.430 1+0 records out 00:04:56.430 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00369229 s, 284 MB/s 00:04:56.430 13:03:53 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:56.430 13:03:53 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:56.430 13:03:53 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:04:56.430 13:03:53 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:04:56.430 13:03:53 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:56.430 No valid GPT data, bailing 00:04:56.430 13:03:53 -- 
scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:56.690 13:03:53 -- scripts/common.sh@391 -- # pt= 00:04:56.690 13:03:53 -- scripts/common.sh@392 -- # return 1 00:04:56.690 13:03:53 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:56.690 1+0 records in 00:04:56.690 1+0 records out 00:04:56.690 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00483783 s, 217 MB/s 00:04:56.690 13:03:53 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:56.690 13:03:53 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:56.690 13:03:53 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:04:56.690 13:03:53 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:04:56.690 13:03:53 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:56.690 No valid GPT data, bailing 00:04:56.690 13:03:53 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:56.690 13:03:53 -- scripts/common.sh@391 -- # pt= 00:04:56.690 13:03:53 -- scripts/common.sh@392 -- # return 1 00:04:56.690 13:03:53 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:56.690 1+0 records in 00:04:56.690 1+0 records out 00:04:56.690 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00427286 s, 245 MB/s 00:04:56.690 13:03:53 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:56.690 13:03:53 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:56.690 13:03:53 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:04:56.690 13:03:53 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:04:56.690 13:03:53 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:56.690 No valid GPT data, bailing 00:04:56.690 13:03:53 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:56.690 13:03:53 -- scripts/common.sh@391 -- # pt= 00:04:56.690 13:03:53 -- scripts/common.sh@392 -- # return 1 00:04:56.690 13:03:53 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:56.690 1+0 records in 00:04:56.690 1+0 records out 00:04:56.690 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00487062 s, 215 MB/s 00:04:56.690 13:03:53 -- spdk/autotest.sh@118 -- # sync 00:04:56.690 13:03:53 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:56.690 13:03:53 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:56.690 13:03:53 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:58.592 13:03:55 -- spdk/autotest.sh@124 -- # uname -s 00:04:58.592 13:03:55 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:58.593 13:03:55 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:58.593 13:03:55 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:58.593 13:03:55 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:58.593 13:03:55 -- common/autotest_common.sh@10 -- # set +x 00:04:58.593 ************************************ 00:04:58.593 START TEST setup.sh 00:04:58.593 ************************************ 00:04:58.593 13:03:55 setup.sh -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:58.852 * Looking for test storage... 
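The block_in_use checks above first try SPDK's spdk-gpt.py and then fall back to blkid; a device with no partition-table signature is reported as "No valid GPT data, bailing" and its first megabyte is zeroed with dd before the tests claim it. A simplified sketch of that probe-and-wipe step, assuming the device path is safe to overwrite (destructive, illustrative only):

    #!/usr/bin/env bash
    # Probe a block device for a partition table; if none is found, scrub its header.
    dev=/dev/nvme0n1                      # example device, substitute your own
    pt=$(blkid -s PTTYPE -o value "$dev" || true)
    if [[ -z $pt ]]; then
        echo "no partition table on $dev, clearing first 1 MiB"
        dd if=/dev/zero of="$dev" bs=1M count=1
    else
        echo "$dev already has a $pt partition table, leaving it alone"
    fi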
00:04:58.852 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:58.852 13:03:55 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:58.852 13:03:55 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:58.852 13:03:55 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:58.852 13:03:55 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:58.852 13:03:55 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:58.852 13:03:55 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:58.852 ************************************ 00:04:58.852 START TEST acl 00:04:58.852 ************************************ 00:04:58.852 13:03:55 setup.sh.acl -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:58.852 * Looking for test storage... 00:04:58.852 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:58.852 13:03:55 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:58.852 13:03:55 setup.sh.acl -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:04:58.852 13:03:55 setup.sh.acl -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:04:58.852 13:03:55 setup.sh.acl -- common/autotest_common.sh@1666 -- # local nvme bdf 00:04:58.852 13:03:55 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:58.852 13:03:55 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:04:58.852 13:03:55 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:04:58.852 13:03:55 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:58.852 13:03:55 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:58.852 13:03:55 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:58.852 13:03:55 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n1 00:04:58.852 13:03:55 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:04:58.852 13:03:55 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:58.852 13:03:55 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:58.852 13:03:55 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:58.852 13:03:55 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n2 00:04:58.852 13:03:55 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme1n2 00:04:58.852 13:03:55 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:58.852 13:03:55 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:58.852 13:03:55 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:58.852 13:03:55 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n3 00:04:58.852 13:03:55 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme1n3 00:04:58.852 13:03:55 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:58.852 13:03:55 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:58.852 13:03:55 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:58.852 13:03:55 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:58.852 13:03:55 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:58.852 
13:03:55 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:58.852 13:03:55 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:58.852 13:03:55 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:58.852 13:03:55 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:59.787 13:03:56 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:59.787 13:03:56 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:59.787 13:03:56 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:59.787 13:03:56 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:59.787 13:03:56 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:59.787 13:03:56 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:00.354 13:03:56 setup.sh.acl -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:05:00.354 13:03:56 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:00.354 13:03:56 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:00.354 Hugepages 00:05:00.354 node hugesize free / total 00:05:00.354 13:03:56 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:05:00.354 13:03:56 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:00.354 13:03:56 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:00.354 00:05:00.354 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:00.354 13:03:56 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:05:00.354 13:03:56 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:00.354 13:03:56 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:00.354 13:03:56 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:05:00.354 13:03:56 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:05:00.354 13:03:56 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:00.354 13:03:56 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:00.354 13:03:56 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:05:00.354 13:03:56 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:00.354 13:03:56 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:05:00.354 13:03:56 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:00.354 13:03:56 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:00.354 13:03:56 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:00.354 13:03:57 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:05:00.354 13:03:57 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:00.354 13:03:57 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:05:00.354 13:03:57 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:00.354 13:03:57 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:00.354 13:03:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:00.354 13:03:57 setup.sh.acl -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:05:00.354 13:03:57 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:05:00.354 13:03:57 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:00.354 13:03:57 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:00.354 13:03:57 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:00.354 ************************************ 00:05:00.354 START TEST denied 
00:05:00.354 ************************************ 00:05:00.354 13:03:57 setup.sh.acl.denied -- common/autotest_common.sh@1121 -- # denied 00:05:00.354 13:03:57 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:05:00.354 13:03:57 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:05:00.354 13:03:57 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:05:00.354 13:03:57 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:05:00.354 13:03:57 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:01.289 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:05:01.289 13:03:57 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:05:01.289 13:03:57 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:05:01.289 13:03:57 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:05:01.289 13:03:57 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:05:01.289 13:03:57 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:05:01.289 13:03:57 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:01.289 13:03:57 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:01.289 13:03:57 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:05:01.289 13:03:57 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:01.290 13:03:57 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:01.996 00:05:01.996 real 0m1.448s 00:05:01.996 user 0m0.563s 00:05:01.996 sys 0m0.820s 00:05:01.996 13:03:58 setup.sh.acl.denied -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:01.996 13:03:58 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:05:01.996 ************************************ 00:05:01.996 END TEST denied 00:05:01.996 ************************************ 00:05:01.996 13:03:58 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:05:01.996 13:03:58 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:01.996 13:03:58 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:01.996 13:03:58 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:01.996 ************************************ 00:05:01.996 START TEST allowed 00:05:01.996 ************************************ 00:05:01.996 13:03:58 setup.sh.acl.allowed -- common/autotest_common.sh@1121 -- # allowed 00:05:01.996 13:03:58 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:05:01.996 13:03:58 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:05:01.996 13:03:58 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:05:01.996 13:03:58 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:01.996 13:03:58 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:05:02.929 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:02.929 13:03:59 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:05:02.929 13:03:59 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:05:02.929 13:03:59 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:05:02.929 13:03:59 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e 
/sys/bus/pci/devices/0000:00:11.0 ]] 00:05:02.929 13:03:59 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:05:02.929 13:03:59 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:02.929 13:03:59 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:02.929 13:03:59 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:05:02.929 13:03:59 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:02.929 13:03:59 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:03.496 ************************************ 00:05:03.496 END TEST allowed 00:05:03.496 ************************************ 00:05:03.496 00:05:03.496 real 0m1.509s 00:05:03.496 user 0m0.653s 00:05:03.496 sys 0m0.848s 00:05:03.496 13:04:00 setup.sh.acl.allowed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:03.496 13:04:00 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:05:03.496 ************************************ 00:05:03.496 END TEST acl 00:05:03.496 ************************************ 00:05:03.496 00:05:03.496 real 0m4.735s 00:05:03.496 user 0m2.066s 00:05:03.496 sys 0m2.597s 00:05:03.496 13:04:00 setup.sh.acl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:03.496 13:04:00 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:03.496 13:04:00 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:03.496 13:04:00 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:03.496 13:04:00 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:03.496 13:04:00 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:03.496 ************************************ 00:05:03.496 START TEST hugepages 00:05:03.496 ************************************ 00:05:03.496 13:04:00 setup.sh.hugepages -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:03.496 * Looking for test storage... 
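Both the denied and allowed ACL tests above verify the outcome the same way: resolve the device's driver symlink under /sys/bus/pci/devices and compare it with the expected driver name. A minimal sketch of that verification, with an example BDF taken from this run substituted in:

    #!/usr/bin/env bash
    # Report which kernel driver (if any) a PCI device is bound to.
    bdf=0000:00:10.0                      # example address from this run
    link=/sys/bus/pci/devices/$bdf/driver
    if [[ -e $link ]]; then
        driver=$(basename "$(readlink -f "$link")")
        echo "$bdf is bound to $driver"
    else
        echo "$bdf is not bound to any driver"
    fi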
00:05:03.496 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:03.496 13:04:00 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:05:03.496 13:04:00 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:05:03.496 13:04:00 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:05:03.496 13:04:00 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:05:03.496 13:04:00 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:05:03.496 13:04:00 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:05:03.496 13:04:00 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:05:03.496 13:04:00 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:05:03.496 13:04:00 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:05:03.496 13:04:00 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:05:03.496 13:04:00 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:03.496 13:04:00 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:03.496 13:04:00 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:03.496 13:04:00 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:05:03.496 13:04:00 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:03.496 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:03.496 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:03.496 13:04:00 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 4742316 kB' 'MemAvailable: 7388104 kB' 'Buffers: 2436 kB' 'Cached: 2847852 kB' 'SwapCached: 0 kB' 'Active: 476672 kB' 'Inactive: 2477572 kB' 'Active(anon): 114448 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2477572 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 296 kB' 'Writeback: 0 kB' 'AnonPages: 105860 kB' 'Mapped: 48624 kB' 'Shmem: 10492 kB' 'KReclaimable: 85860 kB' 'Slab: 165708 kB' 'SReclaimable: 85860 kB' 'SUnreclaim: 79848 kB' 'KernelStack: 6632 kB' 'PageTables: 4536 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412440 kB' 'Committed_AS: 333920 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54964 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 6131712 kB' 'DirectMap1G: 8388608 kB' 00:05:03.496 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:03.496 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:03.496 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:03.496 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:03.496 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:03.496 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:03.496 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': 
' 00:05:03.496 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:03.496 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:03.496 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:03.496 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:03.496 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:03.496 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:03.496 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:03.496 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:03.496 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:03.496 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:03.496 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:03.496 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:03.496 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:03.496 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:03.496 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:03.496 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:03.496 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:03.497 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:03.497 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:03.497 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:03.497 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:03.497 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:03.497 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:03.497 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:03.497 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:03.497 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:03.497 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:03.497 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:03.497 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:03.497 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:03.497 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:03.497 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:03.497 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:03.497 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:03.497 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:03.497 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:03.497 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:03.497 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:03.497 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:03.756 13:04:00 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # 
read -r var val _ 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:03.756 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:03.757 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:03.757 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:03.757 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:03.757 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:03.757 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:03.757 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:03.757 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:03.757 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:03.757 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:03.757 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:03.757 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:03.757 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:03.757 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:03.757 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:03.757 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
00:05:03.757 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:03.757 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:03.757 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:03.757 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:03.757 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:03.757 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:03.757 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:03.757 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:03.757 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:03.757 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:03.757 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:03.757 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:03.757 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:03.757 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:03.757 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:03.757 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:03.757 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:03.757 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:03.757 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:03.757 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:03.757 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:03.757 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:03.757 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:03.757 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:03.757 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:03.757 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:03.757 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:03.757 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:03.757 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:03.757 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:03.757 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:03.757 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:03.757 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:03.757 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:03.757 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:03.757 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:03.757 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:03.757 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:03.757 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:03.757 13:04:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
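The xtrace above is setup/common.sh's get_meminfo helper scanning a /proc/meminfo snapshot for the Hugepagesize field: each line is split on IFS=': ', keys that do not match are skipped with continue, and the matching value is echoed back to the caller. A minimal sketch of that pattern, simplified from the traced commands rather than copied from the script (the real helper also handles the per-node meminfo files under /sys/devices/system/node):

    # simplified sketch of the traced parse loop, not setup/common.sh verbatim
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip Unevictable, Mlocked, SwapTotal, ...
            echo "$val"                        # value in kB, e.g. 2048 for Hugepagesize
            return 0
        done < /proc/meminfo
        return 1
    }

Called as get_meminfo Hugepagesize, this is what produces the "echo 2048" and "return 0" seen just below.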
00:05:03.757 13:04:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:03.757 13:04:00 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:05:03.757 13:04:00 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:05:03.757 13:04:00 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:05:03.757 13:04:00 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:05:03.757 13:04:00 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:05:03.757 13:04:00 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:05:03.757 13:04:00 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:05:03.757 13:04:00 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:05:03.757 13:04:00 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:05:03.757 13:04:00 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:05:03.757 13:04:00 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:05:03.757 13:04:00 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:03.757 13:04:00 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:05:03.757 13:04:00 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:03.757 13:04:00 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:03.757 13:04:00 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:05:03.757 13:04:00 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:03.757 13:04:00 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:03.757 13:04:00 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:03.757 13:04:00 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:03.757 13:04:00 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:03.757 13:04:00 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:03.757 13:04:00 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:03.757 13:04:00 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:03.757 13:04:00 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:05:03.757 13:04:00 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:03.757 13:04:00 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:03.757 13:04:00 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:03.757 ************************************ 00:05:03.757 START TEST default_setup 00:05:03.757 ************************************ 00:05:03.757 13:04:00 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1121 -- # default_setup 00:05:03.757 13:04:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:05:03.757 13:04:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:05:03.757 13:04:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:03.757 13:04:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:05:03.757 13:04:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # 
node_ids=('0') 00:05:03.757 13:04:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:05:03.757 13:04:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:03.757 13:04:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:03.757 13:04:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:03.757 13:04:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:03.757 13:04:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:05:03.757 13:04:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:03.757 13:04:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:03.757 13:04:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:03.757 13:04:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:03.757 13:04:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:03.757 13:04:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:03.757 13:04:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:03.757 13:04:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:05:03.757 13:04:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:05:03.757 13:04:00 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:05:03.757 13:04:00 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:04.324 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:04.324 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:04.587 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:04.587 13:04:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:05:04.587 13:04:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:05:04.587 13:04:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:05:04.587 13:04:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:05:04.587 13:04:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:05:04.587 13:04:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:05:04.587 13:04:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:05:04.587 13:04:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:04.587 13:04:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:04.587 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:04.587 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:04.587 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:04.587 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:04.587 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:04.587 13:04:01 
setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:04.587 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:04.587 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:04.587 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:04.587 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.587 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.587 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6857064 kB' 'MemAvailable: 9502664 kB' 'Buffers: 2436 kB' 'Cached: 2847848 kB' 'SwapCached: 0 kB' 'Active: 493400 kB' 'Inactive: 2477580 kB' 'Active(anon): 131176 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2477580 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 122276 kB' 'Mapped: 48792 kB' 'Shmem: 10468 kB' 'KReclaimable: 85468 kB' 'Slab: 165196 kB' 'SReclaimable: 85468 kB' 'SUnreclaim: 79728 kB' 'KernelStack: 6576 kB' 'PageTables: 4304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 350796 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 6131712 kB' 'DirectMap1G: 8388608 kB' 00:05:04.587 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.587 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.587 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.587 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.587 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.587 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.587 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.587 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.587 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.587 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.587 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.587 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.587 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.587 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.587 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.587 13:04:01 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:05:04.587 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.587 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.587 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.587 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.587 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.587 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.587 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.587 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.587 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.587 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.587 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.587 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.587 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.587 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.587 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.587 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.587 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.587 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.587 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.587 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.587 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.587 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.587 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.587 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.587 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.587 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.587 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.587 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.587 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.587 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.587 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.587 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.587 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.587 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
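Before the verification loop above, the default_setup test sized its hugepage pool from the values just traced: Hugepagesize is 2048 kB, get_test_nr_hugepages was called with 2097152 (kB) for node 0, and hugepages.sh set nr_hugepages=1024, i.e. 2097152 / 2048. clear_hp first writes 0 to every per-node nr_hugepages file and exports CLEAR_HUGE=yes, so scripts/setup.sh starts from a clean pool before rebinding the NVMe controllers to uio_pci_generic. A rough sketch of that sizing step, reconstructed from the traced values rather than copied from hugepages.sh:

    # reconstruction of the traced arithmetic, not the script's own code
    default_hugepages=2048                         # kB, from the Hugepagesize match above
    size=2097152                                   # kB requested by default_setup (2 GiB)
    nr_hugepages=$(( size / default_hugepages ))   # 2097152 / 2048 = 1024 pages
    nodes_test[0]=$nr_hugepages                    # all 1024 pages assigned to node 0

The meminfo snapshot printed above is consistent with that sizing: HugePages_Total and HugePages_Free are both 1024 and Hugetlb is 2097152 kB.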
00:05:04.587 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.587 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.587 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
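The scan in progress here is verify_nr_hugepages pulling individual fields back out of the snapshot printed above: AnonHugePages for the anon count, then HugePages_Surp and HugePages_Rsvd, each read with the same get_meminfo pattern. An illustrative way to express the check the test is building toward (this helper is not from the SPDK scripts; only the /proc/meminfo key names and the 1024-page figure come from the trace):

    # illustrative check, not the script's own code; 1024 matches the pool sized above
    expected=1024
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    rsvd=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
    (( total - surp == expected )) || echo "unexpected hugepage count: total=$total surp=$surp rsvd=$rsvd"

In the snapshot above both surplus and reserved pages are 0, so the pool matches the 1024 pages requested by default_setup.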
00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.588 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6857420 kB' 'MemAvailable: 9503020 kB' 'Buffers: 2436 kB' 'Cached: 2847848 kB' 'SwapCached: 0 kB' 'Active: 493104 kB' 'Inactive: 2477580 kB' 'Active(anon): 130880 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2477580 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 122028 kB' 'Mapped: 48636 kB' 'Shmem: 10468 kB' 'KReclaimable: 85468 kB' 'Slab: 165188 kB' 'SReclaimable: 85468 kB' 'SUnreclaim: 79720 kB' 'KernelStack: 6528 kB' 'PageTables: 4160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 
'DirectMap2M: 6131712 kB' 'DirectMap1G: 8388608 kB' 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.589 13:04:01 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.589 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.590 
13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.590 
13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6857596 kB' 'MemAvailable: 9503200 kB' 'Buffers: 2436 kB' 'Cached: 2847844 kB' 'SwapCached: 0 kB' 'Active: 493284 kB' 'Inactive: 2477584 kB' 'Active(anon): 131060 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2477584 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 122268 kB' 'Mapped: 48636 kB' 'Shmem: 10468 kB' 'KReclaimable: 85468 kB' 'Slab: 165188 kB' 'SReclaimable: 85468 kB' 'SUnreclaim: 79720 kB' 'KernelStack: 6528 kB' 'PageTables: 4220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 350796 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 6131712 kB' 'DirectMap1G: 8388608 kB' 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.590 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.591 
13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var 
val _ 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.591 13:04:01 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.591 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:04.592 nr_hugepages=1024 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:04.592 resv_hugepages=0 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:04.592 surplus_hugepages=0 00:05:04.592 anon_hugepages=0 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:04.592 13:04:01 
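Editor's note: the xtrace above is setup/common.sh's get_meminfo walking every /proc/meminfo field under IFS=': ' until the requested key (HugePages_Rsvd here) matches, then echoing its value; that is why surp and resv both come back 0 before the HugePages_Total query that follows. A minimal standalone sketch of that pattern is below. Names are illustrative, not the exact SPDK helper, and the per-node prefix handling is simplified with sed rather than the mapfile expansion the real script uses.

  #!/usr/bin/env bash
  # Sketch: fetch one field from /proc/meminfo (or a per-node meminfo file when
  # a node id is given), mirroring the IFS=': ' read loop traced in the log.
  get_meminfo_sketch() {
      local key=$1 node=${2:-}
      local file=/proc/meminfo
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          file=/sys/devices/system/node/node$node/meminfo
      fi
      local var val _
      # Per-node files prefix every line with "Node <id> "; strip it so the key
      # lands in $var exactly as it does for /proc/meminfo.
      while IFS=': ' read -r var val _; do
          if [[ $var == "$key" ]]; then
              echo "$val"
              return 0
          fi
      done < <(sed 's/^Node [0-9]* //' "$file")
      return 1
  }

  get_meminfo_sketch HugePages_Rsvd      # prints 0 on the VM traced above
  get_meminfo_sketch HugePages_Total 0   # per-node query; prints 1024 here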
setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6857344 kB' 'MemAvailable: 9502948 kB' 'Buffers: 2436 kB' 'Cached: 2847844 kB' 'SwapCached: 0 kB' 'Active: 493108 kB' 'Inactive: 2477584 kB' 'Active(anon): 130884 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2477584 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 121960 kB' 'Mapped: 48636 kB' 'Shmem: 10468 kB' 'KReclaimable: 85468 kB' 'Slab: 165184 kB' 'SReclaimable: 85468 kB' 'SUnreclaim: 79716 kB' 'KernelStack: 6496 kB' 'PageTables: 4072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 350796 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 6131712 kB' 'DirectMap1G: 8388608 kB' 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.592 13:04:01 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.592 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.593 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.594 13:04:01 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:04.594 13:04:01 
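Editor's note: with resv=0, surp=0 and HugePages_Total=1024 collected, hugepages.sh checks that the configured page count is fully accounted for, 1024 == nr_hugepages + surp + resv (1024 + 0 + 0), and then enumerates the NUMA nodes under /sys/devices/system/node so the pages can be attributed per node (a single node on this VM, so no_nodes=1). A hedged sketch of that consistency check and node scan, with illustrative variable names and the same numbers as the log:

  #!/usr/bin/env bash
  # Sketch of the verification step: total hugepages must equal the requested
  # count plus surplus and reserved pages, then count NUMA nodes for the
  # per-node bookkeeping. Values mirror the log: 1024 pages, one node.
  nr_hugepages=1024
  surp=0   # HugePages_Surp from /proc/meminfo
  resv=0   # HugePages_Rsvd from /proc/meminfo
  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)

  if (( total == nr_hugepages + surp + resv )); then
      echo "hugepage accounting consistent: $total pages"
  fi

  # Enumerate NUMA nodes the same way the get_nodes trace does (node0, node1, ...).
  nodes_sys=()
  for node in /sys/devices/system/node/node[0-9]*; do
      nodes_sys[${node##*node}]=$(awk '/HugePages_Total:/ {print $NF}' "$node/meminfo")
  done
  echo "no_nodes=${#nodes_sys[@]}"   # 1 on this single-node VM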
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6857596 kB' 'MemUsed: 5384380 kB' 'SwapCached: 0 kB' 'Active: 492852 kB' 'Inactive: 2477592 kB' 'Active(anon): 130628 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2477592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'FilePages: 2850284 kB' 'Mapped: 48636 kB' 'AnonPages: 121760 kB' 'Shmem: 10468 kB' 'KernelStack: 6528 kB' 'PageTables: 4160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 85468 kB' 'Slab: 165184 kB' 'SReclaimable: 85468 kB' 'SUnreclaim: 79716 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.594 13:04:01 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.594 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.595 13:04:01 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:04.595 node0=1024 expecting 1024 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:04.595 00:05:04.595 real 0m0.995s 00:05:04.595 user 0m0.470s 00:05:04.595 sys 0m0.451s 00:05:04.595 ************************************ 00:05:04.595 END TEST default_setup 00:05:04.595 ************************************ 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:04.595 13:04:01 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:05:04.595 13:04:01 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:05:04.595 13:04:01 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:04.854 13:04:01 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 
00:05:04.854 13:04:01 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:04.854 ************************************ 00:05:04.854 START TEST per_node_1G_alloc 00:05:04.854 ************************************ 00:05:04.854 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1121 -- # per_node_1G_alloc 00:05:04.854 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:05:04.854 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:05:04.854 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:05:04.854 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:04.854 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:05:04.854 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:04.854 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:05:04.854 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:04.854 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:04.854 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:04.854 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:04.854 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:04.854 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:04.854 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:04.854 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:04.854 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:04.854 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:04.854 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:04.854 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:05:04.854 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:05:04.854 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:05:04.854 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:05:04.854 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:05:04.854 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:04.854 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:05.115 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:05.115 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:05.115 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:05.115 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:05:05.115 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # 
verify_nr_hugepages 00:05:05.115 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:05:05.115 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:05.115 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:05.115 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:05.115 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:05.115 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:05.115 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:05.115 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:05.115 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:05.115 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:05.115 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:05.115 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:05.115 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:05.115 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:05.115 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:05.115 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:05.115 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:05.115 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.115 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7902996 kB' 'MemAvailable: 10548608 kB' 'Buffers: 2436 kB' 'Cached: 2847848 kB' 'SwapCached: 0 kB' 'Active: 493332 kB' 'Inactive: 2477592 kB' 'Active(anon): 131108 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2477592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 122228 kB' 'Mapped: 48700 kB' 'Shmem: 10468 kB' 'KReclaimable: 85468 kB' 'Slab: 165200 kB' 'SReclaimable: 85468 kB' 'SUnreclaim: 79732 kB' 'KernelStack: 6516 kB' 'PageTables: 4188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 350928 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 6131712 kB' 'DirectMap1G: 8388608 kB' 00:05:05.115 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.115 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.115 13:04:01 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.115 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.115 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.115 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.115 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.115 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.115 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.115 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.115 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.115 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.115 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.115 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.115 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.115 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.115 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.115 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.115 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.115 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.115 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.115 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.115 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.115 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.116 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.116 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.116 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.116 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.116 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.116 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.116 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.116 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.116 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.116 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.116 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.116 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:05.116 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.116 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.116 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.116 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.116 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.116 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.116 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.116 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.116 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.116 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.116 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.116 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.116 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.116 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.116 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.116 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.116 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.116 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.116 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.116 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.116 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.116 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.116 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.116 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.116 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.116 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.116 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.116 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.116 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.116 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.116 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.116 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.116 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
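The scan in progress here belongs to the per_node_1G_alloc test started a few lines earlier, where get_test_nr_hugepages turns the requested 1048576 kB into a per-node page count before verify_nr_hugepages re-reads the meminfo snapshot. A rough sketch of that arithmetic, using only values visible in this log (variable names are illustrative):

    # Hedged sketch of the page-count derivation behind NRHUGE=512 HUGENODE=0:
    # 1 GiB requested on node 0 with the default 2 MiB hugepage size.
    size_kb=1048576                      # requested allocation, in kB
    hugepage_kb=2048                     # Hugepagesize reported in /proc/meminfo
    echo $(( size_kb / hugepage_kb ))    # -> 512 pages assigned to node 0

That 512 matches nr_hugepages and nodes_test[0] in the trace, and the snapshot's 'HugePages_Total: 512' / 'Hugetlb: 1048576 kB' lines below.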
00:05:05.116 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.116 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.116 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.116 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.116 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.116 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.116 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.116 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.116 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.116 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.116 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.116 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.116 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.116 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.116 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.116 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.116 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.116 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.116 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.116 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.116 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.116 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.116 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.116 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.116 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.116 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.116 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.116 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.116 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.116 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.116 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.116 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.116 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.116 13:04:01 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.116 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.116 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.116 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.116 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.116 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.116 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.116 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.117 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.117 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.117 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.117 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.117 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.117 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.117 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.117 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.117 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.117 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.117 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.117 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.117 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.117 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.117 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.117 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.117 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.117 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.117 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.117 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.117 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.117 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.117 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.117 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.117 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.117 13:04:01 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:05:05.117 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.117 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.117 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.117 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.117 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.117 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.117 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.117 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.117 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.117 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.117 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.117 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.117 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.117 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.117 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.117 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.117 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.117 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.117 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.117 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.117 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.117 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.117 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.117 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.117 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.117 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.117 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:05.117 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:05.117 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:05.117 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:05.117 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:05.117 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:05.117 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var 
val 00:05:05.117 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:05.117 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:05.117 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:05.117 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:05.117 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:05.117 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:05.117 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.117 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7903148 kB' 'MemAvailable: 10548760 kB' 'Buffers: 2436 kB' 'Cached: 2847848 kB' 'SwapCached: 0 kB' 'Active: 492628 kB' 'Inactive: 2477592 kB' 'Active(anon): 130404 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2477592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 121872 kB' 'Mapped: 48632 kB' 'Shmem: 10468 kB' 'KReclaimable: 85468 kB' 'Slab: 165196 kB' 'SReclaimable: 85468 kB' 'SUnreclaim: 79728 kB' 'KernelStack: 6544 kB' 'PageTables: 4216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 350928 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 6131712 kB' 'DirectMap1G: 8388608 kB' 00:05:05.117 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.117 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.117 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.117 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.117 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.117 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.117 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.117 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.117 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.117 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.117 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.117 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.117 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.117 13:04:01 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.117 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.117 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.117 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.117 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.117 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.117 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.117 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.117 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.117 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.117 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.117 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.117 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.117 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
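Here verify_nr_hugepages is pulling HugePages_Surp out of the same snapshot; once the surplus and reserved counts are in hand it compares HugePages_Total per node against the count the test asked for (the 'node0=1024 expecting 1024' check above for default_setup, 512 for this test). A rough equivalent of that final check, shown as a sketch rather than the repo's own code:

    # Hedged sketch: compare the kernel-reported hugepage pool with the
    # count the test requested, as the verification step traced here does.
    expected=512                                                   # from NRHUGE=512 HUGENODE=0
    total=$(grep -E '^HugePages_Total:' /proc/meminfo | awk '{print $2}')
    surp=$(grep -E '^HugePages_Surp:'   /proc/meminfo | awk '{print $2}')
    rsvd=$(grep -E '^HugePages_Rsvd:'   /proc/meminfo | awk '{print $2}')
    echo "node0=$total expecting $expected (surplus=$surp reserved=$rsvd)"
    [[ $total -eq $expected ]]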
00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.118 13:04:01 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.118 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.119 13:04:01 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:05.119 13:04:01 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7903148 kB' 'MemAvailable: 10548760 kB' 'Buffers: 2436 kB' 'Cached: 2847848 kB' 'SwapCached: 0 kB' 'Active: 492876 kB' 'Inactive: 2477592 kB' 'Active(anon): 130652 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2477592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 121860 kB' 'Mapped: 48632 kB' 'Shmem: 10468 kB' 'KReclaimable: 85468 kB' 'Slab: 165196 kB' 'SReclaimable: 85468 kB' 'SUnreclaim: 79728 kB' 'KernelStack: 6544 kB' 'PageTables: 4216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 350928 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 6131712 kB' 'DirectMap1G: 8388608 kB' 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.119 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.120 
13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.120 
13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.120 13:04:01 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.120 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.121 13:04:01 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.121 
13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:05.121 nr_hugepages=512 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:05.121 resv_hugepages=0 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:05.121 surplus_hugepages=0 00:05:05.121 anon_hugepages=0 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo 
anon_hugepages=0 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7903148 kB' 'MemAvailable: 10548760 kB' 'Buffers: 2436 kB' 'Cached: 2847848 kB' 'SwapCached: 0 kB' 'Active: 492904 kB' 'Inactive: 2477592 kB' 'Active(anon): 130680 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2477592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 121876 kB' 'Mapped: 48632 kB' 'Shmem: 10468 kB' 'KReclaimable: 85468 kB' 'Slab: 165196 kB' 'SReclaimable: 85468 kB' 'SUnreclaim: 79728 kB' 'KernelStack: 6544 kB' 'PageTables: 4216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 350928 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 6131712 kB' 'DirectMap1G: 8388608 kB' 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.121 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
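The checks at setup/hugepages.sh@107-110 that bracket this HugePages_Total scan are a simple accounting identity: the hugepage pool the kernel reports must equal the count the test requested plus any surplus and reserved pages. A rough bash restatement with this run's values (the variable names are mine, echoing the ones visible in the xtrace, not the script's own code):

  nr_hugepages=512   # requested pool size, echoed above
  surp=0             # HugePages_Surp: pages allocated beyond the configured pool
  resv=0             # HugePages_Rsvd: pages reserved by mappings but not yet faulted in
  (( 512 == nr_hugepages + surp + resv )) && echo 'reported pool matches the request'
  (( 512 == nr_hugepages ))               && echo 'no surplus or reserved pages in play'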
00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
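Nearly all of the log above and below is bash xtrace of the same lookup cycle in setup/common.sh@31-32: read one meminfo line, split it on ': ', and continue until the requested key turns up, at which point its value is echoed and the function returns. A minimal standalone sketch of that lookup follows; get_meminfo_value is a hypothetical name, not the autotest helper, and the sed prefix strip stands in for the mapfile/extglob handling at common.sh@28-29:

  get_meminfo_value() {
      local key=$1 node=${2:-} mem_f=/proc/meminfo var val _
      # With a node argument, the same scan runs against that node's sysfs copy.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      # Per-node lines carry a "Node <n> " prefix; drop it, then split on ': '
      # the way common.sh@31 does and stop at the first matching key.
      while IFS=': ' read -r var val _; do
          [[ $var == "$key" ]] && { echo "$val"; return 0; }
      done < <(sed 's/^Node [0-9]* //' "$mem_f")
      return 1
  }
  # In this run: get_meminfo_value HugePages_Rsvd   -> 0
  #              get_meminfo_value HugePages_Total  -> 512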
00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.122 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.123 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.123 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.123 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.123 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.123 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.123 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.123 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.123 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.123 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.123 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.123 13:04:01 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.123 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.123 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.123 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.123 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.123 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.123 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.123 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.123 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.123 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.123 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.123 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.123 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.123 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.123 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.123 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.123 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.123 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.123 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.123 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.123 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.123 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.382 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.382 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:05:05.382 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:05.382 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:05.382 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:05.382 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:05.382 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:05.382 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:05.382 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:05.382 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:05.382 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for 
node in "${!nodes_test[@]}" 00:05:05.382 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:05.382 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:05.382 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:05.382 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:05:05.382 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:05.382 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:05.382 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:05.382 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:05.382 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:05.382 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:05.382 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:05.382 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.382 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.382 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7903920 kB' 'MemUsed: 4338056 kB' 'SwapCached: 0 kB' 'Active: 492916 kB' 'Inactive: 2477592 kB' 'Active(anon): 130692 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2477592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'FilePages: 2850284 kB' 'Mapped: 48632 kB' 'AnonPages: 121884 kB' 'Shmem: 10468 kB' 'KernelStack: 6544 kB' 'PageTables: 4216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 85468 kB' 'Slab: 165196 kB' 'SReclaimable: 85468 kB' 'SUnreclaim: 79728 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:05.382 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.382 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.382 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.382 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.382 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.382 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.382 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.382 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.382 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.382 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.382 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
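get_nodes (setup/hugepages.sh@27-33 above) enumerates the NUMA nodes under /sys/devices/system/node and records the 512 pages found on this runner's single node; the per-node loop at hugepages.sh@115-117 then adds the reserved count to each node's expectation and re-reads that node's own meminfo, which is the node0 dump just above. The counters involved can also be read straight from the standard sysfs layout; a small sketch under that assumption (array and variable names are mine):

  shopt -s extglob                  # needed for the node+([0-9]) glob, as in hugepages.sh@29
  declare -A node_pages
  for node_dir in /sys/devices/system/node/node+([0-9]); do
      n=${node_dir##*node}
      # Kernel sysfs ABI: per-node counters for the 2048 kB hugepage size.
      node_pages[$n]=$(< "$node_dir/hugepages/hugepages-2048kB/nr_hugepages")
      echo "node$n: ${node_pages[$n]} hugepages of 2048 kB"
  done
  echo "no_nodes=${#node_pages[@]}"  # 1 on this runner, matching hugepages.sh@32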
00:05:05.382 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.382 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.382 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.382 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.382 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.382 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.382 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.382 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.382 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.382 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.382 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.382 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.382 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.382 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.382 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.382 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.382 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.382 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.382 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.382 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.382 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.382 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.382 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.382 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.382 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.382 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.382 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.382 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.382 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.382 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.382 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.382 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.382 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:05.382 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
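The HugePages_Surp value this scan is walking toward is also exposed per node by the kernel, which gives a quick cross-check of the 512 total / 512 free / 0 surplus figures printed for node0 above:

  hp=/sys/devices/system/node/node0/hugepages/hugepages-2048kB
  printf 'node0: nr=%s free=%s surplus=%s\n' \
      "$(< "$hp"/nr_hugepages)" "$(< "$hp"/free_hugepages)" "$(< "$hp"/surplus_hugepages)"
  # Expected to mirror the per-node meminfo dump above: 512, 512, 0.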
00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.383 13:04:01 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.383 13:04:01 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:05.383 node0=512 expecting 512 00:05:05.383 ************************************ 00:05:05.383 END TEST per_node_1G_alloc 00:05:05.383 ************************************ 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:05.383 00:05:05.383 real 0m0.550s 00:05:05.383 user 0m0.265s 00:05:05.383 sys 0m0.289s 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:05.383 13:04:01 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:05.383 13:04:01 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:05:05.383 13:04:01 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:05.383 13:04:01 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:05.383 13:04:01 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:05.383 ************************************ 00:05:05.383 START TEST even_2G_alloc 00:05:05.383 ************************************ 00:05:05.383 13:04:01 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1121 -- # even_2G_alloc 00:05:05.383 13:04:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:05:05.384 13:04:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:05.384 13:04:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:05.384 13:04:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:05.384 13:04:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:05.384 13:04:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:05.384 13:04:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:05.384 13:04:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:05.384 13:04:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:05.384 13:04:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:05.384 13:04:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:05.384 13:04:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:05.384 13:04:01 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:05.384 13:04:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:05.384 13:04:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:05.384 13:04:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:05:05.384 13:04:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:05.384 13:04:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:05.384 13:04:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:05.384 13:04:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:05:05.384 13:04:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:05:05.384 13:04:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:05:05.384 13:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:05.384 13:04:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:05.643 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:05.643 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:05.643 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:05.643 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:05:05.643 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:05:05.643 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:05.643 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:05.643 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:05.643 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:05.643 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:05.643 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:05.643 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:05.643 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:05.643 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:05.643 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:05.643 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:05.643 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:05.643 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:05.643 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:05.643 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:05.643 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:05.643 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 
'MemTotal: 12241976 kB' 'MemFree: 6855968 kB' 'MemAvailable: 9501580 kB' 'Buffers: 2436 kB' 'Cached: 2847848 kB' 'SwapCached: 0 kB' 'Active: 493296 kB' 'Inactive: 2477592 kB' 'Active(anon): 131072 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2477592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 122408 kB' 'Mapped: 48676 kB' 'Shmem: 10468 kB' 'KReclaimable: 85468 kB' 'Slab: 165244 kB' 'SReclaimable: 85468 kB' 'SUnreclaim: 79776 kB' 'KernelStack: 6532 kB' 'PageTables: 4196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 350928 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 6131712 kB' 'DirectMap1G: 8388608 kB' 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.644 13:04:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
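The loop traced above is setup/common.sh's get_meminfo walking /proc/meminfo one 'Key: value' pair at a time (IFS=': '; read -r var val _) until it reaches the field it was asked for, AnonHugePages in this pass. A minimal standalone sketch of that parsing pattern follows; the function name meminfo_value is illustrative, and the per-node handling of the real helper (it also accepts /sys/devices/system/node/node<N>/meminfo and strips the leading 'Node <N> ' prefix, as the mapfile line above shows) is left out:

    # meminfo_value KEY -- print the value column for KEY from /proc/meminfo.
    meminfo_value() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # Quoting "$get" keeps the == comparison literal; under set -x bash
            # prints the quoted word backslash-escaped, which is why the log shows
            # patterns like \A\n\o\n\H\u\g\e\P\a\g\e\s.
            if [[ $var == "$get" ]]; then
                echo "${val:-0}"
                return 0
            fi
        done < /proc/meminfo
        echo 0
    }

    # Against the snapshot printed above, meminfo_value AnonHugePages prints 0,
    # matching the anon=0 the trace records once the scan reaches that field.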
00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.644 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
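Before this scan started, the setup/hugepages.sh lines traced above showed get_test_nr_hugepages turning the requested 2097152 kB into nr_hugepages=1024 and get_test_nr_hugepages_per_node spreading that budget over the available NUMA nodes; with _no_nodes=1 the whole 1024 ends up in nodes_test[0]. A rough sketch of that kind of even split is below; the function and variable names are illustrative, not the exact setup/hugepages.sh implementation:

    # split_evenly TOTAL NODE...  -- print "node=<n> pages=<count>" for each node,
    # giving any remainder to the first nodes so the counts still sum to TOTAL.
    split_evenly() {
        local total=$1; shift
        local nodes=("$@")
        local n=${#nodes[@]} per rem i
        per=$(( total / n ))
        rem=$(( total % n ))
        for (( i = 0; i < n; i++ )); do
            echo "node=${nodes[i]} pages=$(( per + (i < rem ? 1 : 0) ))"
        done
    }

    # On this single-node runner the whole budget lands on node 0, matching
    # nodes_test[0]=1024 in the trace:  split_evenly 1024 0  ->  node=0 pages=1024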
00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6856220 kB' 'MemAvailable: 9501832 kB' 'Buffers: 2436 kB' 'Cached: 2847848 kB' 'SwapCached: 0 kB' 'Active: 492668 kB' 'Inactive: 2477592 kB' 'Active(anon): 130444 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2477592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 121856 kB' 'Mapped: 48632 kB' 'Shmem: 10468 kB' 'KReclaimable: 85468 kB' 'Slab: 165236 kB' 'SReclaimable: 85468 kB' 'SUnreclaim: 79768 kB' 'KernelStack: 6544 kB' 'PageTables: 4216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 350928 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 6131712 kB' 'DirectMap1G: 8388608 kB' 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.645 13:04:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.645 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.646 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.646 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.646 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.646 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.646 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.646 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.646 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.646 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.646 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.646 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.646 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.646 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.646 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.646 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.646 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.646 13:04:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.646 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.646 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.646 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.646 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.646 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.646 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.646 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.646 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.646 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.646 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.646 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.646 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.646 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.646 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.907 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.907 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.907 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.907 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.907 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.907 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.907 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.907 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.907 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.907 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.907 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.907 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.907 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.907 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.907 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.907 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.907 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.907 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.907 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.907 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
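The numbers in the snapshots being re-scanned here are mutually consistent: the test asked for 2097152 kB of 2 MiB pages, 2097152 / 2048 = 1024 pages, and the kernel reports HugePages_Total: 1024 with Hugetlb: 2097152 kB (1024 x 2048 kB). The same cross-check can be done on a live box with plain shell arithmetic; the variable names below are just for the example:

    want_kb=2097152                                              # 2 GiB in kB, as requested by the test
    page_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this runner
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)  # 1024 on this runner
    echo "pages needed : $(( want_kb / page_kb ))"               # 2097152 / 2048 = 1024
    echo "hugetlb (kB) : $(( total * page_kb ))"                 # 1024 * 2048 = 2097152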
00:05:05.907 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.907 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.907 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.907 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.907 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.907 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.907 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.907 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.907 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.907 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.907 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.907 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.907 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.907 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.907 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.907 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.907 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.907 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.907 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.908 
13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.908 13:04:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:05.908 13:04:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6856220 kB' 'MemAvailable: 9501832 kB' 'Buffers: 2436 kB' 'Cached: 2847848 kB' 'SwapCached: 0 kB' 'Active: 492716 kB' 'Inactive: 2477592 kB' 'Active(anon): 130492 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2477592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 121856 kB' 'Mapped: 48632 kB' 'Shmem: 10468 kB' 'KReclaimable: 85468 kB' 'Slab: 165232 kB' 'SReclaimable: 85468 kB' 'SUnreclaim: 79764 kB' 'KernelStack: 6544 kB' 'PageTables: 4216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 350928 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 6131712 kB' 'DirectMap1G: 8388608 kB' 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.908 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
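At this point the verification has already recorded anon=0 (hugepages.sh@97) and surp=0 (@99) and is now fetching HugePages_Rsvd the same way. Putting those pieces together, a check in the same spirit could look like the sketch below; it reuses the meminfo_value sketch from earlier and is illustrative only, not the exact assertion setup/hugepages.sh makes:

    expected=1024                              # NRHUGE for this even_2G_alloc run
    anon=$(meminfo_value AnonHugePages)        # 0 kB in the snapshots above
    surp=$(meminfo_value HugePages_Surp)       # 0
    resv=$(meminfo_value HugePages_Rsvd)       # 0
    total=$(meminfo_value HugePages_Total)     # 1024
    if (( total - surp == expected )); then
        echo "hugepage accounting OK: total=$total surp=$surp resv=$resv anon=${anon}kB"
    else
        echo "unexpected hugepage accounting: total=$total surp=$surp" >&2
    fi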
00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.909 13:04:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.909 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.910 13:04:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:05.910 nr_hugepages=1024 00:05:05.910 resv_hugepages=0 00:05:05.910 surplus_hugepages=0 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:05.910 anon_hugepages=0 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:05.910 13:04:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6856220 kB' 'MemAvailable: 9501832 kB' 'Buffers: 2436 kB' 'Cached: 2847848 kB' 'SwapCached: 0 kB' 'Active: 492872 kB' 'Inactive: 2477592 kB' 'Active(anon): 130648 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2477592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 121752 kB' 'Mapped: 48632 kB' 'Shmem: 10468 kB' 'KReclaimable: 85468 kB' 'Slab: 165232 kB' 'SReclaimable: 85468 kB' 'SUnreclaim: 79764 kB' 'KernelStack: 6512 kB' 'PageTables: 4112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 350928 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 6131712 kB' 'DirectMap1G: 8388608 kB' 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.910 13:04:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.910 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.911 
13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
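The long runs of "[[ <key> == \H\u\g\e\P\a\g\e\s\_... ]]" followed by "continue" on either side of this point are bash xtrace output from the get_meminfo helper in setup/common.sh: it snapshots /proc/meminfo (or a per-node meminfo file when a node argument is given), strips any "Node <N> " prefix, then walks the snapshot key by key until the requested field matches and echoes its value. Below is a minimal sketch reconstructed from the trace entries themselves; the real helper's quoting, unit handling, and error paths may differ.

    shopt -s extglob  # needed for the +([0-9]) prefix-stripping pattern below

    # Sketch of get_meminfo as suggested by the xtrace above (setup/common.sh@16-33).
    get_meminfo() {
        local get=$1 node=$2
        local var val _
        local mem_f mem

        mem_f=/proc/meminfo
        # Prefer the per-node file when a node was requested and it exists.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        # Per-node meminfo lines start with "Node <N> "; drop that prefix.
        mem=("${mem[@]#Node +([0-9]) }")

        # Each non-matching key produces the "[[ <key> == <pattern> ]]" /
        # "continue" pair seen in the trace; the matching key echoes its value.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")

        return 1
    }

Called as "get_meminfo HugePages_Total" this prints 1024 for the snapshot above, and "get_meminfo HugePages_Surp 0" reads node0's file and prints 0, matching the "echo 1024" / "echo 0" and "return 0" entries in the trace.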
00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.911 13:04:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.911 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.912 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.912 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.912 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.912 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.912 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.912 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.912 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.912 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.912 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.912 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.912 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.912 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.912 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.912 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.912 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.912 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.912 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.912 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.912 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.912 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.912 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.912 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.912 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.912 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.912 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.912 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.912 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.912 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.912 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.912 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.912 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.912 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.912 
13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.912 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.912 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.912 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.912 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.912 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.912 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.912 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:05:05.912 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:05.912 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:05.912 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:05.912 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:05.912 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:05.912 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:05.912 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:05.912 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:05.912 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:05.912 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:05.912 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:05.912 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:05.912 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:05:05.912 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:05.912 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:05.912 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:05.912 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:05.912 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:05.912 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:05.912 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:05.912 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.912 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6856220 kB' 'MemUsed: 5385756 kB' 'SwapCached: 0 kB' 'Active: 492896 kB' 'Inactive: 2477592 kB' 'Active(anon): 130672 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2477592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'FilePages: 2850284 kB' 'Mapped: 48632 kB' 'AnonPages: 
121784 kB' 'Shmem: 10468 kB' 'KernelStack: 6496 kB' 'PageTables: 4060 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 85468 kB' 'Slab: 165228 kB' 'SReclaimable: 85468 kB' 'SUnreclaim: 79760 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:05.912 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.912 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.912 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.912 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.912 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.912 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.912 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.912 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.912 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.912 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.912 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.912 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.912 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.912 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.912 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.912 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.912 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.912 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.912 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.912 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.912 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.912 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.912 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.912 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.912 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.912 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # continue 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.913 13:04:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:05.913 node0=1024 expecting 1024 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:05.913 13:04:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:05.914 00:05:05.914 real 0m0.570s 00:05:05.914 user 0m0.263s 00:05:05.914 sys 0m0.313s 00:05:05.914 ************************************ 00:05:05.914 END TEST even_2G_alloc 00:05:05.914 ************************************ 00:05:05.914 13:04:02 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:05.914 13:04:02 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:05.914 13:04:02 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:05:05.914 13:04:02 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:05.914 13:04:02 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:05.914 13:04:02 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:05.914 ************************************ 00:05:05.914 START TEST odd_alloc 00:05:05.914 ************************************ 00:05:05.914 13:04:02 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1121 -- # odd_alloc 00:05:05.914 13:04:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:05:05.914 13:04:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:05:05.914 13:04:02 
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:05.914 13:04:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:05.914 13:04:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:05:05.914 13:04:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:05.914 13:04:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:05.914 13:04:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:05.914 13:04:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:05:05.914 13:04:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:05.914 13:04:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:05.914 13:04:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:05.914 13:04:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:05.914 13:04:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:05.914 13:04:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:05.914 13:04:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:05:05.914 13:04:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:05.914 13:04:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:05.914 13:04:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:05.914 13:04:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:05:05.914 13:04:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:05:05.914 13:04:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:05:05.914 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:05.914 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:06.172 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:06.435 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:06.435 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:06.435 13:04:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:05:06.435 13:04:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:05:06.435 13:04:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:06.435 13:04:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:06.435 13:04:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:06.435 13:04:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:06.435 13:04:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:06.435 13:04:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:06.435 13:04:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:06.435 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:06.435 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 
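From here the odd_alloc test repeats the verification that even_2G_alloc just passed, this time for the odd 1025-page request (HUGEMEM=2049 with HUGE_EVEN_ALLOC=yes): verify_nr_hugepages re-reads /proc/meminfo and the per-node meminfo file through get_meminfo and checks that the kernel-wide and per-node hugepage counters add up. The sketch below mirrors the identity tested at setup/hugepages.sh@107 and the "node0=1024 expecting 1024" comparison in the even_2G_alloc trace above; the function name and the single-node shortcut are illustrative, not the test's actual code.

    # Hypothetical condensed form of the accounting visible in the trace;
    # relies on the get_meminfo sketch shown earlier.
    check_hugepage_accounting() {
        local requested=$1   # 1024 for even_2G_alloc, 1025 for odd_alloc
        local total surp resv node0_total

        total=$(get_meminfo HugePages_Total)
        surp=$(get_meminfo HugePages_Surp)
        resv=$(get_meminfo HugePages_Rsvd)

        # System-wide: reported pages must equal the request plus any surplus
        # and reserved pages, as in "(( 1024 == nr_hugepages + surp + resv ))".
        (( total == requested + surp + resv )) || return 1

        # Per-node: this VM has a single NUMA node, so node0 must hold the
        # whole allocation, as in "node0=1024 expecting 1024".
        node0_total=$(get_meminfo HugePages_Total 0)
        [[ $node0_total == "$requested" ]]
    }

Running check_hugepage_accounting 1025 would be the odd_alloc counterpart of the check that just succeeded for 1024.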
00:05:06.435 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:06.435 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:06.435 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:06.435 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:06.435 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:06.435 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:06.435 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:06.435 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.435 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6853756 kB' 'MemAvailable: 9499368 kB' 'Buffers: 2436 kB' 'Cached: 2847848 kB' 'SwapCached: 0 kB' 'Active: 493552 kB' 'Inactive: 2477592 kB' 'Active(anon): 131328 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2477592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122196 kB' 'Mapped: 48760 kB' 'Shmem: 10468 kB' 'KReclaimable: 85468 kB' 'Slab: 165236 kB' 'SReclaimable: 85468 kB' 'SUnreclaim: 79768 kB' 'KernelStack: 6516 kB' 'PageTables: 4212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 350928 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 6131712 kB' 'DirectMap1G: 8388608 kB' 00:05:06.435 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.435 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.435 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.435 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.435 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.435 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.435 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.435 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.435 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.435 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.435 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.435 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.435 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.435 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.435 
13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.435 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.435 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.435 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.435 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.435 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.435 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.435 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.435 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.435 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.435 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.435 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.435 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.435 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.435 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.435 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.435 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.435 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.435 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.435 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.435 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.435 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.435 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.435 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.435 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.435 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.435 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.435 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.435 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.435 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.435 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.435 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.435 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.435 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.435 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.435 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:05:06.435 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.435 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.435 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.435 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.435 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.435 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.435 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.435 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.436 13:04:02 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.436 13:04:02 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.436 
13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:06.436 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:06.437 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:06.437 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:06.437 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.437 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6853504 kB' 'MemAvailable: 9499116 kB' 'Buffers: 2436 kB' 'Cached: 2847848 kB' 'SwapCached: 0 kB' 'Active: 493184 kB' 'Inactive: 2477592 kB' 'Active(anon): 130960 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2477592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122064 kB' 'Mapped: 48632 kB' 'Shmem: 10468 kB' 'KReclaimable: 85468 kB' 'Slab: 165272 kB' 'SReclaimable: 85468 kB' 'SUnreclaim: 79804 kB' 'KernelStack: 6528 kB' 'PageTables: 4164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 350928 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 6131712 kB' 'DirectMap1G: 8388608 kB' 00:05:06.437 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.437 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.437 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.437 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.437 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
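The AnonHugePages lookup has just finished: the helper printed the whole /proc/meminfo snapshot, walked it key by key with IFS=': ', echoed 0 when it reached AnonHugePages and returned, so verify_nr_hugepages records anon=0 and immediately repeats the same walk for HugePages_Surp. A minimal sketch of a reader in that shape, pieced together from the traced common.sh lines; the per-node branch and the extglob prefix strip follow the trace, but treat the function layout here as illustrative rather than a copy of the script:

  #!/usr/bin/env bash
  shopt -s extglob   # needed for the "Node <N> " prefix strip below
  # Sketch: fetch one key from /proc/meminfo, or from a node's meminfo file
  # when a node number is supplied, as the traced common.sh helper appears to do.
  get_meminfo() {
    local get=$1 node=${2:-} var val _
    local mem_f=/proc/meminfo mem
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
      mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")          # node files prefix every line
    while IFS=': ' read -r var val _; do
      if [[ $var == "$get" ]]; then
        echo "${val:-0}"
        return 0
      fi
    done < <(printf '%s\n' "${mem[@]}")
    return 1
  }
  get_meminfo HugePages_Total    # 1025 while this test is running
  get_meminfo AnonHugePages      # "AnonHugePages: 0 kB" -> prints 0
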
read -r var val _ 00:05:06.437 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.437 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.437 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.437 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.437 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.437 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.437 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.437 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.437 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.437 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.437 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.437 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.437 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.437 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.437 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.437 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.437 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.437 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.437 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.437 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.437 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.437 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.437 13:04:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.437 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.437 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.437 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.437 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.437 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.437 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.437 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.437 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.437 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.437 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.437 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.437 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.437 13:04:03 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:06.437 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.437 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.437 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.437 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.437 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.437 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.437 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.437 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.437 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.437 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.437 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.437 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.437 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.437 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.437 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.437 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.437 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.437 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.437 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.437 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.437 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.437 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.437 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.437 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.437 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.437 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.437 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.437 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.437 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.437 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.437 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.437 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.437 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.437 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.437 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.437 13:04:03 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.437 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.437 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.437 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.437 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.437 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.437 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.437 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.437 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.437 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.437 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.437 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.437 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.437 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.437 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.437 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.437 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.437 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.437 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.437 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.437 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.437 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.437 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.437 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.437 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.437 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.437 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.437 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.437 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.437 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.437 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.437 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.437 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.437 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.437 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.437 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.437 
13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.438 13:04:03 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
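With surp=0 recorded, the same meminfo walk now starts over for HugePages_Rsvd; it comes back 0 as well further down, and verify_nr_hugepages then checks that the kernel's accounting matches the request: HugePages_Total (1025) must equal nr_hugepages plus the surplus and reserved counts, and must still be the odd number the test asked for (hugepages.sh@107 and @109). A minimal sketch of that final check, reading the counters with plain awk instead of the script's own helper:

  #!/usr/bin/env bash
  # Sketch: the accounting check the odd_alloc trace performs once anon,
  # surp and resv are known (awk here stands in for the get_meminfo helper).
  nr_hugepages=1025                                    # what the test requested
  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
  surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
  resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
  if ((total == nr_hugepages + surp + resv)) && ((total == nr_hugepages)); then
    echo "odd hugepage count verified: ${total} pages"
  else
    echo "hugepage accounting mismatch: total=${total} surp=${surp} resv=${resv}" >&2
    exit 1
  fi
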
00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6853912 kB' 'MemAvailable: 9499524 kB' 'Buffers: 2436 kB' 'Cached: 2847848 kB' 'SwapCached: 0 kB' 'Active: 492932 kB' 'Inactive: 2477592 kB' 'Active(anon): 130708 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2477592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 121808 kB' 'Mapped: 48632 kB' 'Shmem: 10468 kB' 'KReclaimable: 85468 kB' 'Slab: 165272 kB' 'SReclaimable: 85468 kB' 'SUnreclaim: 79804 kB' 'KernelStack: 6528 kB' 'PageTables: 4164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 350928 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 6131712 kB' 'DirectMap1G: 8388608 kB' 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.438 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.439 13:04:03 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.439 13:04:03 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.439 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.440 
13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:06.440 nr_hugepages=1025 00:05:06.440 resv_hugepages=0 00:05:06.440 surplus_hugepages=0 00:05:06.440 anon_hugepages=0 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6853912 kB' 'MemAvailable: 9499524 kB' 'Buffers: 2436 kB' 'Cached: 2847848 kB' 'SwapCached: 0 kB' 'Active: 492788 kB' 'Inactive: 2477592 kB' 'Active(anon): 130564 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2477592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 121932 kB' 'Mapped: 48632 kB' 'Shmem: 10468 kB' 'KReclaimable: 85468 kB' 'Slab: 165264 kB' 'SReclaimable: 85468 kB' 'SUnreclaim: 79796 kB' 'KernelStack: 6528 kB' 'PageTables: 4164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 350928 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 
kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 6131712 kB' 'DirectMap1G: 8388608 kB' 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.440 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.441 
13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.441 13:04:03 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.441 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6854532 kB' 'MemUsed: 5387444 kB' 'SwapCached: 0 kB' 'Active: 492976 kB' 'Inactive: 2477592 kB' 'Active(anon): 130752 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2477592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'FilePages: 2850284 kB' 'Mapped: 48632 kB' 'AnonPages: 121904 kB' 'Shmem: 10468 kB' 'KernelStack: 6544 kB' 'PageTables: 4220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 85468 kB' 'Slab: 165264 kB' 'SReclaimable: 85468 kB' 'SUnreclaim: 79796 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.442 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.443 13:04:03 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:06.443 node0=1025 expecting 1025 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:05:06.443 00:05:06.443 real 0m0.593s 00:05:06.443 user 0m0.299s 00:05:06.443 sys 0m0.287s 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:06.443 ************************************ 00:05:06.443 END TEST odd_alloc 00:05:06.443 ************************************ 00:05:06.443 13:04:03 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:06.701 13:04:03 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:05:06.701 13:04:03 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:06.701 13:04:03 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:06.701 13:04:03 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:06.701 ************************************ 00:05:06.701 START TEST custom_alloc 00:05:06.701 ************************************ 00:05:06.701 13:04:03 
setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1121 -- # custom_alloc 00:05:06.701 13:04:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:05:06.701 13:04:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:05:06.701 13:04:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:05:06.701 13:04:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:05:06.701 13:04:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:05:06.701 13:04:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:05:06.701 13:04:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:05:06.701 13:04:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:06.701 13:04:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:06.701 13:04:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:06.701 13:04:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:06.701 13:04:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:06.701 13:04:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:06.701 13:04:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:06.701 13:04:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:06.701 13:04:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:06.701 13:04:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:06.701 13:04:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:06.701 13:04:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:06.701 13:04:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:06.701 13:04:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:06.701 13:04:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:06.701 13:04:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:06.701 13:04:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:06.701 13:04:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:05:06.701 13:04:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:05:06.701 13:04:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:06.701 13:04:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:06.701 13:04:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:06.701 13:04:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:05:06.701 13:04:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:06.701 13:04:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:06.701 13:04:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:06.701 
13:04:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:06.701 13:04:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:06.701 13:04:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:06.701 13:04:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:06.701 13:04:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:05:06.701 13:04:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:06.701 13:04:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:06.701 13:04:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:05:06.701 13:04:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:05:06.701 13:04:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:05:06.701 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:06.701 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:06.961 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:06.961 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:06.961 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:06.961 13:04:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:05:06.961 13:04:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:05:06.961 13:04:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:05:06.961 13:04:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:06.961 13:04:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:06.961 13:04:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:06.961 13:04:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:06.961 13:04:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:06.961 13:04:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:06.961 13:04:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:06.961 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:06.961 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:06.961 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:06.961 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:06.961 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:06.961 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:06.961 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:06.961 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:06.961 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:06.961 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:05:06.961 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.961 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7901396 kB' 'MemAvailable: 10547008 kB' 'Buffers: 2436 kB' 'Cached: 2847848 kB' 'SwapCached: 0 kB' 'Active: 493424 kB' 'Inactive: 2477592 kB' 'Active(anon): 131200 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2477592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 122296 kB' 'Mapped: 48764 kB' 'Shmem: 10468 kB' 'KReclaimable: 85468 kB' 'Slab: 165276 kB' 'SReclaimable: 85468 kB' 'SUnreclaim: 79808 kB' 'KernelStack: 6516 kB' 'PageTables: 4212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 350928 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 6131712 kB' 'DirectMap1G: 8388608 kB' 00:05:06.961 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.961 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.961 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.961 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.961 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.961 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.961 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.961 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.961 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.961 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.961 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.961 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.961 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.961 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.961 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.961 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.961 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.961 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.961 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.961 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.961 13:04:03 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.961 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.961 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.961 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.961 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.961 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.961 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.961 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.961 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.962 13:04:03 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.962 13:04:03 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.962 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:06.963 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.963 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:06.963 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:06.963 13:04:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:06.963 13:04:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:06.963 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:06.963 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:06.963 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:06.963 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:06.963 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:06.963 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:06.963 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:06.963 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:06.963 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:06.963 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.963 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.963 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7901396 kB' 'MemAvailable: 10547008 kB' 'Buffers: 2436 kB' 'Cached: 2847848 kB' 'SwapCached: 0 kB' 'Active: 492724 kB' 'Inactive: 2477592 kB' 'Active(anon): 130500 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2477592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 121904 kB' 'Mapped: 48636 kB' 'Shmem: 10468 kB' 'KReclaimable: 85468 kB' 'Slab: 165276 kB' 'SReclaimable: 85468 kB' 'SUnreclaim: 79808 kB' 'KernelStack: 6544 kB' 'PageTables: 4216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 350928 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 6131712 kB' 'DirectMap1G: 8388608 kB' 00:05:06.963 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.963 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.963 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.963 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.963 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.963 13:04:03 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.963 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.963 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.963 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.963 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.963 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.963 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.963 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.963 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.963 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.963 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.963 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.963 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.963 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.963 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.963 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.963 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.963 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.963 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.963 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.963 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.963 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.963 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.963 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.963 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.963 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.963 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.963 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.963 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.963 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.963 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.963 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.963 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.963 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.963 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.963 13:04:03 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.963 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.963 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.963 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.963 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.963 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.963 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.963 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.963 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.963 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.963 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.963 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.963 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.963 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.963 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.963 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.963 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.963 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.963 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.963 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.963 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.963 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.963 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.963 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.963 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.963 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.963 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.963 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.963 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.963 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.963 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.963 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.963 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.963 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.963 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
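
The repeated [[ <key> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue pairs in this stretch of the trace come from the test framework's get_meminfo helper in setup/common.sh scanning /proc/meminfo one "key: value" line at a time until it reaches the requested field. A minimal sketch of that parsing pattern, reconstructed from the trace rather than copied from the real helper (which also handles per-node meminfo files), with the function name get_meminfo_sketch chosen here purely for illustration:

  get_meminfo_sketch() {                     # illustration only; not the SPDK source
      local get=$1 var val rest
      local mem_f=/proc/meminfo              # the trace checks a per-node path first, then falls back here
      while IFS=': ' read -r var val rest; do
          if [[ $var == "$get" ]]; then      # e.g. AnonHugePages, HugePages_Surp, HugePages_Rsvd
              echo "$val"                    # numeric value; the "kB" unit, if any, lands in $rest
              return 0
          fi
      done < "$mem_f"
      return 1
  }
  # usage mirroring hugepages.sh@97 above: anon=$(get_meminfo_sketch AnonHugePages)   -> 0
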
00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.964 13:04:03 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.964 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7901396 kB' 'MemAvailable: 10547008 kB' 'Buffers: 2436 kB' 'Cached: 2847848 kB' 'SwapCached: 0 kB' 'Active: 492492 kB' 'Inactive: 2477592 kB' 'Active(anon): 130268 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2477592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 121704 kB' 'Mapped: 48896 kB' 'Shmem: 10468 kB' 'KReclaimable: 85468 kB' 'Slab: 165276 kB' 'SReclaimable: 85468 kB' 'SUnreclaim: 79808 kB' 'KernelStack: 6544 kB' 'PageTables: 4204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 350560 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 6131712 kB' 'DirectMap1G: 8388608 kB' 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
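
Right before each scan the trace shows mapfile -t mem followed by mem=("${mem[@]#Node +([0-9]) }"). With extglob enabled, that expansion strips a leading "Node <N> " prefix from every captured line, presumably so that a per-node file such as /sys/devices/system/node/node0/meminfo parses with the same key/value loop as /proc/meminfo. A small standalone illustration, using the MemTotal/MemFree figures printed in the dumps above as sample data:

  shopt -s extglob                                        # required for the +([0-9]) pattern
  mem=("Node 0 MemTotal: 12241976 kB" "Node 0 MemFree: 7901396 kB")
  mem=("${mem[@]#Node +([0-9]) }")                        # drop the "Node 0 " prefix
  printf '%s\n' "${mem[@]}"                               # MemTotal: 12241976 kB / MemFree: 7901396 kB
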
00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.965 13:04:03 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.965 13:04:03 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.965 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.966 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.966 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.966 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.966 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.966 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.966 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.966 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.966 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.966 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.966 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.966 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.966 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.966 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.966 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.966 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.966 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.966 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.966 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.966 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.966 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.966 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.966 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.966 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.966 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.966 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.966 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.966 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.966 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.966 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:06.966 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.966 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.966 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.966 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.966 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.966 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.966 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.966 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.966 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.966 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.966 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.966 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.966 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.966 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.966 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.966 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.966 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.966 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.966 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.966 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.966 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.966 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.966 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.966 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.966 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.966 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.966 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.966 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.966 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.966 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.966 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.966 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.966 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.966 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.966 13:04:03 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.966 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.966 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.966 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.966 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.966 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.966 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.966 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.966 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.966 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.966 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.966 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.966 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.966 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.966 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.966 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.966 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.966 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.226 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:07.227 nr_hugepages=512 00:05:07.227 resv_hugepages=0 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:07.227 surplus_hugepages=0 00:05:07.227 anon_hugepages=0 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7901396 kB' 'MemAvailable: 10547008 kB' 'Buffers: 2436 kB' 'Cached: 2847848 kB' 'SwapCached: 0 kB' 'Active: 492952 kB' 'Inactive: 2477592 kB' 'Active(anon): 130728 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2477592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 121760 kB' 'Mapped: 48636 kB' 'Shmem: 10468 kB' 'KReclaimable: 85468 kB' 'Slab: 165268 kB' 'SReclaimable: 85468 kB' 'SUnreclaim: 79800 kB' 'KernelStack: 6512 kB' 'PageTables: 4084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 350928 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 6131712 kB' 'DirectMap1G: 8388608 kB' 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.227 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.228 13:04:03 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7901396 kB' 'MemUsed: 4340580 kB' 'SwapCached: 0 kB' 'Active: 492756 kB' 'Inactive: 2477596 kB' 'Active(anon): 130532 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2477596 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'FilePages: 2850288 kB' 'Mapped: 48636 kB' 'AnonPages: 121908 kB' 'Shmem: 10468 kB' 'KernelStack: 6528 kB' 'PageTables: 4160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 85468 kB' 'Slab: 165256 kB' 'SReclaimable: 85468 kB' 'SUnreclaim: 79788 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.228 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.229 13:04:03 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.229 13:04:03 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:07.229 node0=512 expecting 512 00:05:07.229 ************************************ 00:05:07.229 END TEST custom_alloc 00:05:07.229 ************************************ 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:07.229 13:04:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:07.230 13:04:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:07.230 13:04:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:07.230 00:05:07.230 real 0m0.567s 00:05:07.230 user 0m0.269s 00:05:07.230 sys 0m0.311s 00:05:07.230 13:04:03 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:07.230 13:04:03 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:07.230 13:04:03 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:05:07.230 13:04:03 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:07.230 13:04:03 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:07.230 13:04:03 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:07.230 ************************************ 00:05:07.230 START TEST no_shrink_alloc 00:05:07.230 ************************************ 00:05:07.230 13:04:03 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1121 -- # no_shrink_alloc 00:05:07.230 13:04:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:05:07.230 13:04:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:07.230 13:04:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:07.230 13:04:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:05:07.230 13:04:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:07.230 13:04:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:05:07.230 13:04:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:07.230 13:04:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:07.230 13:04:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:07.230 13:04:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # 
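The custom_alloc trace above repeatedly steps setup/common.sh's get_meminfo helper key-by-key through /proc/meminfo (or a per-node meminfo file) until it reaches the requested field. Condensed from the traced statements, the helper behaves roughly like the sketch below; this is a simplified illustration, not the verbatim setup/common.sh.

    # Condensed sketch of the traced get_meminfo helper (simplified).
    get_meminfo() {
        local get=$1 node=$2 var val _
        local mem_f=/proc/meminfo
        # The traced helper prefers the per-node meminfo file when it exists.
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        # Per-node files prefix each line with "Node <N> "; strip it
        # (the real script does this with a parameter expansion over a mapfile array).
        while IFS=': ' read -r var val _; do
            # e.g. "HugePages_Total:     512" -> var=HugePages_Total, val=512
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
        return 1
    }

In the trace it is invoked as get_meminfo HugePages_Total (whole system) and get_meminfo HugePages_Surp 0 (node 0 only), yielding 512 and 0, which is why the check (( 512 == nr_hugepages + surp + resv )) and the final 'node0=512 expecting 512' comparison both pass.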
user_nodes=('0') 00:05:07.230 13:04:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:07.230 13:04:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:07.230 13:04:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:07.230 13:04:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:07.230 13:04:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:07.230 13:04:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:07.230 13:04:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:07.230 13:04:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:07.230 13:04:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:05:07.230 13:04:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:05:07.230 13:04:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:07.230 13:04:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:07.488 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:07.488 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:07.488 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:07.751 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:05:07.751 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:05:07.751 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:07.751 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:07.751 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:07.751 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:07.751 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:07.751 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:07.751 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:07.751 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:07.751 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:07.751 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:07.751 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:07.751 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:07.751 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:07.751 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:07.751 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:07.751 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:07.751 13:04:04 
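For the no_shrink_alloc case, the traced get_test_nr_hugepages call requests 2097152 kB pinned to node 0. With the 2048 kB hugepage size reported in the meminfo dumps, the page count works out as follows (an illustrative calculation, not additional script output):

    # nr_hugepages = requested size / hugepage size
    #              = 2097152 kB / 2048 kB = 1024 pages
    # user_nodes=('0')  ->  nodes_test[0]=1024, i.e. all 1024 pages on node 0,
    # consistent with the later 'HugePages_Total: 1024' / 'Hugetlb: 2097152 kB' readings.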
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.751 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.751 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6851796 kB' 'MemAvailable: 9497408 kB' 'Buffers: 2436 kB' 'Cached: 2847848 kB' 'SwapCached: 0 kB' 'Active: 493256 kB' 'Inactive: 2477592 kB' 'Active(anon): 131032 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2477592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 122260 kB' 'Mapped: 49084 kB' 'Shmem: 10468 kB' 'KReclaimable: 85468 kB' 'Slab: 165280 kB' 'SReclaimable: 85468 kB' 'SUnreclaim: 79812 kB' 'KernelStack: 6548 kB' 'PageTables: 4300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 350928 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 6131712 kB' 'DirectMap1G: 8388608 kB' 00:05:07.751 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.751 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.751 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.751 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.751 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.751 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.751 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.751 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.751 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.751 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.751 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.751 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.751 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.751 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.751 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.751 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.751 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.751 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.752 13:04:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.752 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6851796 kB' 'MemAvailable: 9497408 kB' 'Buffers: 2436 kB' 'Cached: 2847852 kB' 'SwapCached: 0 kB' 'Active: 493404 kB' 'Inactive: 2477592 kB' 'Active(anon): 131180 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2477592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 122128 kB' 'Mapped: 48636 kB' 'Shmem: 10468 kB' 'KReclaimable: 85468 kB' 'Slab: 165264 kB' 'SReclaimable: 85468 kB' 'SUnreclaim: 79796 kB' 'KernelStack: 6576 kB' 'PageTables: 4320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 350928 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 6131712 
kB' 'DirectMap1G: 8388608 kB' 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.753 13:04:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.753 13:04:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.753 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.754 13:04:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:07.754 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:07.755 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:07.755 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:07.755 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:07.755 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:07.755 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:07.755 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:07.755 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.755 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6851796 kB' 'MemAvailable: 9497408 kB' 'Buffers: 2436 kB' 'Cached: 2847852 kB' 'SwapCached: 0 kB' 'Active: 492760 kB' 'Inactive: 2477592 kB' 'Active(anon): 130536 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2477592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 121680 kB' 'Mapped: 48636 kB' 'Shmem: 10468 kB' 'KReclaimable: 85468 kB' 'Slab: 165264 kB' 'SReclaimable: 85468 kB' 'SUnreclaim: 79796 kB' 'KernelStack: 6544 kB' 'PageTables: 4216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 350928 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 6131712 kB' 'DirectMap1G: 8388608 kB' 00:05:07.755 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.755 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.755 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.755 13:04:04 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:07.755 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.755 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.755 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.755 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.755 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.755 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.755 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.755 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.755 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.755 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.755 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.755 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.755 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.755 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.755 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.755 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.755 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.755 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.755 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.755 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.755 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.755 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.755 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.755 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.755 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.755 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.755 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.755 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.755 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.755 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.755 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.755 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.755 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.755 13:04:04 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.755 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.755 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.755 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.755 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.755 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.755 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.755 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.755 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.755 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.755 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.755 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.755 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.755 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.755 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.755 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.755 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.755 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.755 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.755 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.755 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.755 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.755 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.755 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.755 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.755 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.755 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.755 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.755 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.755 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.755 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.755 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.755 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.755 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.755 13:04:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.755 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.755 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.755 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.755 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.755 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.755 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.755 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.755 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.755 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.755 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.756 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.756 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.756 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.756 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.756 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.756 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.756 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.756 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.756 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.756 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.756 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.756 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.756 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.756 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.756 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.756 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.756 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.756 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.756 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.756 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.756 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.756 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.756 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.756 13:04:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.756 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.756 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.756 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.756 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.756 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.756 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.756 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.756 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.756 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.756 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.756 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.756 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.756 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.756 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.756 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.756 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.756 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.756 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.756 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.756 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.756 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.756 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.756 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.756 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.756 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.756 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.756 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.756 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.756 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.756 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.756 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.756 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.756 13:04:04 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:07.756 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.756 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.756 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.756 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.756 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.756 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.756 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.756 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.756 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.756 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.756 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.756 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.756 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.756 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.756 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.756 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.756 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.756 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.756 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.756 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.756 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.756 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.756 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.756 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.756 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.756 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.756 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.756 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.756 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.756 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.756 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.756 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.756 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.756 13:04:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.756 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.756 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.756 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.756 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.756 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:07.757 nr_hugepages=1024 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:07.757 resv_hugepages=0 00:05:07.757 surplus_hugepages=0 00:05:07.757 anon_hugepages=0 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6851796 kB' 'MemAvailable: 9497408 kB' 'Buffers: 2436 kB' 'Cached: 2847852 kB' 'SwapCached: 0 kB' 'Active: 492920 kB' 'Inactive: 2477592 kB' 'Active(anon): 130696 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2477592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 121836 kB' 'Mapped: 48636 kB' 'Shmem: 10468 kB' 'KReclaimable: 85468 kB' 'Slab: 165260 kB' 'SReclaimable: 85468 kB' 'SUnreclaim: 79792 kB' 'KernelStack: 6528 kB' 'PageTables: 4160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 350928 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 6131712 kB' 'DirectMap1G: 8388608 kB' 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.757 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.758 
13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.758 
13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.758 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in 
"${!nodes_test[@]}" 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6852316 kB' 'MemUsed: 5389660 kB' 'SwapCached: 0 kB' 'Active: 492812 kB' 'Inactive: 2477592 kB' 'Active(anon): 130588 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2477592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'FilePages: 2850288 kB' 'Mapped: 48636 kB' 'AnonPages: 121728 kB' 'Shmem: 10468 kB' 'KernelStack: 6560 kB' 'PageTables: 4268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 85468 kB' 'Slab: 165260 kB' 'SReclaimable: 85468 kB' 'SUnreclaim: 79792 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.759 
13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.759 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.760 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.760 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.760 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.760 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.760 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.760 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.760 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.760 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.760 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.760 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.760 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.760 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.760 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.760 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.760 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.760 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.760 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.760 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.760 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.760 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.760 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.760 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.760 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.760 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.760 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.760 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.760 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.760 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.760 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.760 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.760 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.760 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.760 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.760 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.760 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.760 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.760 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.760 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.760 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.760 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.760 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.760 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.760 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.760 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.760 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.760 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.760 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.760 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.760 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.760 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:07.760 
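(Editor's aside: the long per-field scans traced above all come from one meminfo lookup pattern — read /proc/meminfo, or the per-node copy under /sys/devices/system/node/nodeN/meminfo, strip any "Node <n>" prefix, split each line on ': ', and return the value of a single field. The following is a minimal stand-alone sketch of that pattern, using a hypothetical helper name get_meminfo_sketch; it is not the actual setup/common.sh code.)

#!/usr/bin/env bash
# Sketch only: mirrors the lookup pattern visible in the trace above,
# not the SPDK setup/common.sh implementation itself.
shopt -s extglob

get_meminfo_sketch() {
    local get=$1 node=${2:-}            # field name, optional NUMA node index
    local mem_f=/proc/meminfo line var val _
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")    # per-node files prefix lines with "Node <n> "
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"                 # numeric value only; the unit (kB) is dropped
            return 0
        fi
    done
    return 1
}

get_meminfo_sketch HugePages_Total      # e.g. 1024 (system-wide, as echoed above)
get_meminfo_sketch HugePages_Surp 0     # e.g. 0 (node 0)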
13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:07.760 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:07.760 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:07.760 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:07.760 node0=1024 expecting 1024 00:05:07.760 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:07.760 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:07.760 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:07.760 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:05:07.760 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:05:07.760 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:05:07.760 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:07.760 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:08.019 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:08.283 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:08.283 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:08.283 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:05:08.283 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:05:08.283 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:05:08.283 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:08.283 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:08.283 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:08.283 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:08.283 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:08.283 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:08.283 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:08.283 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:08.283 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:08.283 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:08.283 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:08.283 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:08.283 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:08.283 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:08.283 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:08.283 13:04:04 
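(Editor's aside: the no_shrink_alloc case asserts that re-running scripts/setup.sh with NRHUGE=512 and CLEAR_HUGE=no must not shrink the pool — the INFO line above confirms the 1024 pages stay allocated on node0. A condensed, hedged sketch of that assertion, reusing the hypothetical get_meminfo_sketch helper from the previous aside and a placeholder $SPDK_DIR path, might read as follows; it is not the literal setup/hugepages.sh code.)

# Sketch of the "no shrink" check; $SPDK_DIR is a placeholder for the repo path.
expected=1024                                      # pages allocated earlier in the test
surp=$(get_meminfo_sketch HugePages_Surp)          # expected 0
resv=$(get_meminfo_sketch HugePages_Rsvd)          # expected 0

# Ask for fewer pages without clearing; the existing allocation should remain.
CLEAR_HUGE=no NRHUGE=512 "$SPDK_DIR"/scripts/setup.sh

after=$(get_meminfo_sketch HugePages_Total)
(( after == expected + surp + resv )) || {
    echo "hugepage pool shrank: $after != $expected" >&2
    exit 1
}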
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:08.283 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.283 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.283 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6850672 kB' 'MemAvailable: 9496284 kB' 'Buffers: 2436 kB' 'Cached: 2847852 kB' 'SwapCached: 0 kB' 'Active: 489360 kB' 'Inactive: 2477596 kB' 'Active(anon): 127136 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2477596 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 118392 kB' 'Mapped: 48020 kB' 'Shmem: 10468 kB' 'KReclaimable: 85464 kB' 'Slab: 165060 kB' 'SReclaimable: 85464 kB' 'SUnreclaim: 79596 kB' 'KernelStack: 6452 kB' 'PageTables: 3716 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335952 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 6131712 kB' 'DirectMap1G: 8388608 kB' 00:05:08.283 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.283 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.283 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.283 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.283 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.283 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.283 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.283 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.283 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.283 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.283 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.283 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.283 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.283 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.283 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.283 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.283 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.283 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.283 13:04:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.283 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.284 13:04:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.284 13:04:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.284 13:04:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.284 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6850672 kB' 'MemAvailable: 9496284 kB' 'Buffers: 2436 kB' 'Cached: 2847852 kB' 'SwapCached: 0 kB' 'Active: 488844 kB' 'Inactive: 2477596 kB' 'Active(anon): 126620 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2477596 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 117704 kB' 'Mapped: 47896 kB' 'Shmem: 10468 kB' 'KReclaimable: 85464 kB' 'Slab: 165052 kB' 'SReclaimable: 85464 kB' 'SUnreclaim: 79588 kB' 'KernelStack: 6464 kB' 'PageTables: 3824 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335952 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 
'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 6131712 kB' 'DirectMap1G: 8388608 kB' 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.285 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.286 
13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.286 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.287 13:04:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6850672 kB' 'MemAvailable: 9496284 kB' 'Buffers: 2436 kB' 'Cached: 2847852 kB' 'SwapCached: 0 kB' 'Active: 488644 kB' 'Inactive: 2477596 kB' 'Active(anon): 126420 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2477596 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 117524 kB' 'Mapped: 47896 kB' 'Shmem: 10468 kB' 'KReclaimable: 85464 kB' 'Slab: 165052 kB' 'SReclaimable: 85464 kB' 'SUnreclaim: 79588 kB' 'KernelStack: 6432 kB' 'PageTables: 3716 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335952 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 6131712 kB' 'DirectMap1G: 8388608 kB' 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.287 13:04:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.287 13:04:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.287 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.288 13:04:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
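The repeated xtrace records in this stretch are setup/common.sh's get_meminfo helper scanning a meminfo snapshot one field at a time: set IFS=': ', read -r var val _, compare var against the requested key, continue on a mismatch, and echo the value and return once it matches. A minimal sketch of that logic, assuming a direct read of /proc/meminfo (the real helper appears to feed mapfile from a generated snapshot, per the printf at common.sh@16) and inferring the node handling from the tests at common.sh@23 and @25 rather than from the SPDK source:

  #!/usr/bin/env bash
  shopt -s extglob   # the +([0-9]) pattern below needs extended globbing

  get_meminfo() {    # usage: get_meminfo <field> [numa-node]  (sketch, not the verbatim helper)
      local get=$1 node=${2:-}
      local var val _ line
      local mem_f=/proc/meminfo mem

      # Assumed intent of the @23/@25 tests: prefer the per-node meminfo when a node is given.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi

      mapfile -t mem < "$mem_f"
      # Per-node meminfo lines carry a "Node N " prefix; @29 strips it the same way.
      mem=("${mem[@]#Node +([0-9]) }")

      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"             # e.g. var=HugePages_Surp val=0
          [[ $var == "$get" ]] && { echo "$val"; return 0; } # @33: echo the value, return 0
      done
      return 1
  }

  get_meminfo HugePages_Surp   # prints 0 for the snapshot captured in this log

With anon, surp and resv all coming back 0 here, the test appears to be verifying that no anonymous, surplus or reserved huge pages are in play while the 1024-page pool is allocated.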
00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.288 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.289 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.289 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.289 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.289 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.289 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.289 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.289 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.289 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.289 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.289 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.289 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.289 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.289 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.289 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.289 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.289 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.289 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.289 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.289 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.289 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.289 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.289 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.289 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.289 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.289 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.289 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.289 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:08.289 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:08.289 nr_hugepages=1024 00:05:08.289 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:08.289 13:04:04 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:08.289 resv_hugepages=0 00:05:08.289 surplus_hugepages=0 00:05:08.289 anon_hugepages=0 00:05:08.289 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:08.289 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:08.289 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:08.289 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:08.289 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:08.289 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:08.289 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:08.289 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:08.289 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:08.289 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:08.289 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:08.289 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:08.289 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:08.289 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:08.289 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:08.289 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.289 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6850672 kB' 'MemAvailable: 9496284 kB' 'Buffers: 2436 kB' 'Cached: 2847852 kB' 'SwapCached: 0 kB' 'Active: 488704 kB' 'Inactive: 2477596 kB' 'Active(anon): 126480 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2477596 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 117604 kB' 'Mapped: 47896 kB' 'Shmem: 10468 kB' 'KReclaimable: 85464 kB' 'Slab: 165052 kB' 'SReclaimable: 85464 kB' 'SUnreclaim: 79588 kB' 'KernelStack: 6432 kB' 'PageTables: 3720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335952 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 6131712 kB' 'DirectMap1G: 8388608 kB' 00:05:08.289 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.289 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.289 13:04:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.289 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.289 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.289 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.289 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.289 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.289 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.289 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.289 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.289 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.289 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.289 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.289 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.289 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.289 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.289 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.289 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.289 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.289 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.289 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.289 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.289 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.289 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.289 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.289 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.289 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.289 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.289 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.289 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.289 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.289 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.289 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.289 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.289 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.289 
13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:08.289 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [[ $var == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] fails and the IFS=': ' read/continue cycle repeats for each remaining field: Inactive(anon) Active(file) Inactive(file) Unevictable Mlocked SwapTotal SwapFree Zswap Zswapped Dirty Writeback AnonPages Mapped Shmem KReclaimable Slab SReclaimable SUnreclaim KernelStack PageTables SecPageTables NFS_Unstable Bounce WritebackTmp CommitLimit Committed_AS VmallocTotal VmallocUsed VmallocChunk Percpu HardwareCorrupted AnonHugePages ShmemHugePages ShmemPmdMapped FileHugePages FilePmdMapped CmaTotal CmaFree Unaccepted
00:05:08.291 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:08.291 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:05:08.291 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:08.291 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:08.291 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:08.291 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:05:08.291 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:08.291 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:08.291 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:08.291 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
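The trace above is setup/common.sh's get_meminfo helper at work: with IFS=': ' it reads one 'Field: value' pair per line and compares each field name against the requested key (HugePages_Total here), hitting 'continue' until the key matches and its value (1024) is echoed back; the backslash-heavy right-hand side is just xtrace's rendering of a literal string inside [[ ... ]]. A minimal, self-contained sketch of that pattern follows; the function name is made up for illustration and this is not the SPDK helper itself.

#!/usr/bin/env bash
# Sketch: echo the value of one /proc/meminfo field, mirroring the
# IFS=': ' read / continue loop shown in the trace above.
get_meminfo_field() {
    local want=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$want" ]] || continue   # every non-matching field hits 'continue'
        echo "$val"                         # e.g. 1024 for HugePages_Total on this runner
        return 0
    done < /proc/meminfo
    return 1
}

Called as 'get_meminfo_field HugePages_Total', this prints just the number; the real helper also accepts a NUMA node argument, which the next block of the trace exercises.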
00:05:08.291 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:08.291 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:08.291 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:08.291 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:08.291 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:05:08.291 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:08.291 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:08.291 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:08.291 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:08.291 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:08.291 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:08.291 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:08.291 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:08.291 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:08.291 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6850672 kB' 'MemUsed: 5391304 kB' 'SwapCached: 0 kB' 'Active: 488688 kB' 'Inactive: 2477596 kB' 'Active(anon): 126464 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2477596 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'FilePages: 2850288 kB' 'Mapped: 47896 kB' 'AnonPages: 117572 kB' 'Shmem: 10468 kB' 'KernelStack: 6416 kB' 'PageTables: 3668 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 85464 kB' 'Slab: 165052 kB' 'SReclaimable: 85464 kB' 'SUnreclaim: 79588 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:05:08.291 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [[ $var == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] fails and the IFS=': ' read/continue cycle repeats in order for every field in the node0 snapshot above until HugePages_Surp is reached
00:05:08.292 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
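Here the same field scan runs against the per-node file /sys/devices/system/node/node0/meminfo (selected at common.sh@23-24), whose lines carry a "Node 0 " prefix that the mapfile plus "${mem[@]#Node +([0-9]) }" step strips before parsing; the value echoed just below (0 surplus pages against 1024 total) is what lets hugepages.sh report node0=1024 expecting 1024. A rough equivalent of that per-node read is sketched below, with a hypothetical function name and extglob enabled as the original pattern requires.

#!/usr/bin/env bash
shopt -s extglob   # the +([0-9]) pattern below needs extended globbing
# Sketch: read one field from a node's meminfo, dropping the "Node N " prefix.
node_meminfo_field() {
    local node=$1 want=$2 line var val _
    local -a mem
    mapfile -t mem < "/sys/devices/system/node/node${node}/meminfo"
    mem=("${mem[@]#Node +([0-9]) }")    # "Node 0 HugePages_Surp: 0" -> "HugePages_Surp: 0"
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$want" ]] || continue
        echo "$val"
        return 0
    done
    return 1
}

With it, 'node_meminfo_field 0 HugePages_Surp' would print 0 and 'node_meminfo_field 0 HugePages_Total' would print 1024, matching the per-node accounting this test verifies.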
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:08.292 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:08.292 node0=1024 expecting 1024 00:05:08.292 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:08.292 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:08.292 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:08.292 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:08.292 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:08.292 13:04:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:08.292 00:05:08.292 real 0m1.144s 00:05:08.292 user 0m0.558s 00:05:08.292 sys 0m0.591s 00:05:08.292 ************************************ 00:05:08.292 END TEST no_shrink_alloc 00:05:08.292 ************************************ 00:05:08.292 13:04:04 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:08.292 13:04:04 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:08.292 13:04:05 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:05:08.292 13:04:05 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:08.292 13:04:05 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:08.292 13:04:05 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:08.292 13:04:05 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:08.292 13:04:05 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:08.292 13:04:05 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:08.613 13:04:05 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:08.613 13:04:05 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:08.613 ************************************ 00:05:08.613 END TEST hugepages 00:05:08.613 ************************************ 00:05:08.613 00:05:08.613 real 0m4.876s 00:05:08.613 user 0m2.279s 00:05:08.613 sys 0m2.507s 00:05:08.613 13:04:05 setup.sh.hugepages -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:08.613 13:04:05 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:08.613 13:04:05 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:08.613 13:04:05 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:08.613 13:04:05 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:08.613 13:04:05 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:08.613 ************************************ 00:05:08.613 START TEST driver 00:05:08.613 ************************************ 00:05:08.613 13:04:05 setup.sh.driver -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:08.613 * Looking for test storage... 
00:05:08.613 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:08.613 13:04:05 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:05:08.613 13:04:05 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:08.613 13:04:05 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:09.194 13:04:05 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:09.194 13:04:05 setup.sh.driver -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:09.194 13:04:05 setup.sh.driver -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:09.194 13:04:05 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:09.194 ************************************ 00:05:09.194 START TEST guess_driver 00:05:09.194 ************************************ 00:05:09.194 13:04:05 setup.sh.driver.guess_driver -- common/autotest_common.sh@1121 -- # guess_driver 00:05:09.194 13:04:05 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:09.194 13:04:05 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:05:09.194 13:04:05 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:05:09.194 13:04:05 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:05:09.194 13:04:05 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:05:09.194 13:04:05 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:09.194 13:04:05 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:09.194 13:04:05 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:09.194 13:04:05 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:05:09.194 13:04:05 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:05:09.194 13:04:05 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:05:09.194 13:04:05 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:05:09.194 13:04:05 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:05:09.194 13:04:05 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:05:09.194 13:04:05 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:05:09.194 13:04:05 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:05:09.194 13:04:05 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:05:09.194 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:05:09.194 13:04:05 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:05:09.194 13:04:05 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:05:09.194 13:04:05 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:09.194 13:04:05 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:05:09.194 Looking for driver=uio_pci_generic 00:05:09.194 13:04:05 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:09.194 13:04:05 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 
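In the guess_driver trace above, pick_driver tries vfio first: it is accepted only when /sys/kernel/iommu_groups is non-empty or vfio's enable_unsafe_noiommu_mode is Y, and on this VM both checks fail ((( 0 > 0 )) and [[ '' == Y ]]), so the test falls back to uio_pci_generic after modprobe --show-depends confirms the module resolves to real .ko files. A condensed sketch of that decision follows; the function name is invented and nullglob is assumed so that an empty iommu_groups directory yields an empty array, which is how the original ends up comparing 0 > 0.

#!/usr/bin/env bash
shopt -s nullglob   # empty /sys/kernel/iommu_groups -> empty array below
# Sketch of the vfio-vs-uio driver choice traced above.
pick_pci_driver() {
    local -a groups=(/sys/kernel/iommu_groups/*)
    local unsafe=""
    [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
        unsafe=$(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    if (( ${#groups[@]} > 0 )) || [[ $unsafe == Y ]]; then
        echo vfio-pci                     # IOMMU available: prefer vfio
        return 0
    fi
    if modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
        echo uio_pci_generic              # no IOMMU: fall back to uio
        return 0
    fi
    echo 'No valid driver found' >&2
    return 1
}

On this runner the function would print uio_pci_generic, which is exactly the driver the trace ends up looking for.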
00:05:09.194 13:04:05 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:05:09.194 13:04:05 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:09.772 13:04:06 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:05:09.772 13:04:06 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:05:09.772 13:04:06 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:09.772 13:04:06 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:09.772 13:04:06 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:09.772 13:04:06 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:10.031 13:04:06 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:10.031 13:04:06 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:10.031 13:04:06 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:10.031 13:04:06 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:10.031 13:04:06 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:05:10.031 13:04:06 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:10.031 13:04:06 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:10.599 00:05:10.599 real 0m1.430s 00:05:10.599 user 0m0.526s 00:05:10.599 sys 0m0.908s 00:05:10.599 ************************************ 00:05:10.599 END TEST guess_driver 00:05:10.599 ************************************ 00:05:10.599 13:04:07 setup.sh.driver.guess_driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:10.599 13:04:07 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:05:10.599 00:05:10.599 real 0m2.119s 00:05:10.599 user 0m0.751s 00:05:10.599 sys 0m1.428s 00:05:10.599 13:04:07 setup.sh.driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:10.599 13:04:07 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:10.599 ************************************ 00:05:10.599 END TEST driver 00:05:10.599 ************************************ 00:05:10.599 13:04:07 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:10.599 13:04:07 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:10.599 13:04:07 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:10.599 13:04:07 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:10.599 ************************************ 00:05:10.599 START TEST devices 00:05:10.599 ************************************ 00:05:10.599 13:04:07 setup.sh.devices -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:10.599 * Looking for test storage... 
00:05:10.599 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:10.599 13:04:07 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:10.599 13:04:07 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:05:10.599 13:04:07 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:10.599 13:04:07 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:11.535 13:04:08 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:05:11.535 13:04:08 setup.sh.devices -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:05:11.535 13:04:08 setup.sh.devices -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:05:11.535 13:04:08 setup.sh.devices -- common/autotest_common.sh@1666 -- # local nvme bdf 00:05:11.535 13:04:08 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:11.535 13:04:08 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:05:11.535 13:04:08 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:05:11.535 13:04:08 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:11.535 13:04:08 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:11.535 13:04:08 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:11.535 13:04:08 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n2 00:05:11.535 13:04:08 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n2 00:05:11.535 13:04:08 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:05:11.535 13:04:08 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:11.535 13:04:08 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:11.535 13:04:08 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n3 00:05:11.535 13:04:08 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n3 00:05:11.535 13:04:08 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:05:11.535 13:04:08 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:11.535 13:04:08 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:11.535 13:04:08 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n1 00:05:11.535 13:04:08 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:05:11.535 13:04:08 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:11.535 13:04:08 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:11.535 13:04:08 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:05:11.535 13:04:08 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:05:11.535 13:04:08 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:11.535 13:04:08 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:11.535 13:04:08 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:11.535 13:04:08 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:11.535 13:04:08 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:11.535 13:04:08 
setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:11.535 13:04:08 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:05:11.535 13:04:08 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:05:11.535 13:04:08 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:11.535 13:04:08 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:05:11.535 13:04:08 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:05:11.535 No valid GPT data, bailing 00:05:11.535 13:04:08 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:11.535 13:04:08 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:11.535 13:04:08 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:11.535 13:04:08 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:11.535 13:04:08 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:11.535 13:04:08 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:11.535 13:04:08 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:05:11.535 13:04:08 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:11.535 13:04:08 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:11.535 13:04:08 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:05:11.535 13:04:08 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:11.535 13:04:08 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:05:11.535 13:04:08 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:11.535 13:04:08 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:05:11.535 13:04:08 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:05:11.535 13:04:08 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:05:11.535 13:04:08 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:05:11.535 13:04:08 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:05:11.535 No valid GPT data, bailing 00:05:11.535 13:04:08 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:05:11.535 13:04:08 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:11.535 13:04:08 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:11.535 13:04:08 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:05:11.535 13:04:08 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n2 00:05:11.535 13:04:08 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:05:11.535 13:04:08 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:05:11.535 13:04:08 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:11.535 13:04:08 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:11.535 13:04:08 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:05:11.535 13:04:08 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:11.535 13:04:08 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:05:11.535 13:04:08 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:11.535 13:04:08 setup.sh.devices -- 
setup/devices.sh@202 -- # pci=0000:00:11.0 00:05:11.535 13:04:08 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:05:11.535 13:04:08 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:05:11.535 13:04:08 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:05:11.535 13:04:08 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:05:11.535 No valid GPT data, bailing 00:05:11.535 13:04:08 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:05:11.793 13:04:08 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:11.793 13:04:08 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:11.793 13:04:08 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:05:11.793 13:04:08 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n3 00:05:11.793 13:04:08 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:05:11.793 13:04:08 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:05:11.793 13:04:08 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:11.793 13:04:08 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:11.793 13:04:08 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:05:11.793 13:04:08 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:11.793 13:04:08 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:05:11.793 13:04:08 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:11.793 13:04:08 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:05:11.793 13:04:08 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:05:11.793 13:04:08 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:05:11.793 13:04:08 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:05:11.793 13:04:08 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:05:11.793 No valid GPT data, bailing 00:05:11.793 13:04:08 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:11.793 13:04:08 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:11.793 13:04:08 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:11.793 13:04:08 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:05:11.793 13:04:08 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:05:11.793 13:04:08 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:05:11.793 13:04:08 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:05:11.793 13:04:08 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:05:11.793 13:04:08 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:11.793 13:04:08 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:05:11.793 13:04:08 setup.sh.devices -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:05:11.793 13:04:08 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:11.794 13:04:08 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:11.794 13:04:08 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:11.794 13:04:08 setup.sh.devices -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:05:11.794 13:04:08 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:11.794 ************************************ 00:05:11.794 START TEST nvme_mount 00:05:11.794 ************************************ 00:05:11.794 13:04:08 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1121 -- # nvme_mount 00:05:11.794 13:04:08 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:11.794 13:04:08 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:11.794 13:04:08 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:11.794 13:04:08 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:11.794 13:04:08 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:11.794 13:04:08 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:11.794 13:04:08 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:05:11.794 13:04:08 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:11.794 13:04:08 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:11.794 13:04:08 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:05:11.794 13:04:08 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:05:11.794 13:04:08 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:11.794 13:04:08 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:11.794 13:04:08 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:11.794 13:04:08 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:11.794 13:04:08 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:11.794 13:04:08 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:11.794 13:04:08 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:11.794 13:04:08 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:12.728 Creating new GPT entries in memory. 00:05:12.728 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:12.728 other utilities. 00:05:12.728 13:04:09 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:12.728 13:04:09 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:12.728 13:04:09 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:12.728 13:04:09 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:12.728 13:04:09 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:14.104 Creating new GPT entries in memory. 00:05:14.104 The operation has completed successfully. 
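The partition_drive step above zaps the GPT on the test disk and creates a single partition spanning sectors 2048..264191 (the helper computes that span as 1073741824 divided by a 4096-byte block size), running sgdisk under flock and using scripts/sync_dev_uevents.sh so it only proceeds once the kernel has announced the new /dev/nvme0n1p1 node; the 'wait 70993' just below is the shell reaping that listener. A simplified stand-in for the sequence, polling for the device node instead of listening for uevents, could look like this (the disk path and sector numbers are taken from the run above, everything else is illustrative):

#!/usr/bin/env bash
set -euo pipefail
# Sketch: wipe a test disk, create one partition, wait for its node, format it.
DISK=${1:-/dev/nvme0n1}
PART=${DISK}p1

sgdisk "$DISK" --zap-all                            # destroy existing GPT/MBR structures
flock "$DISK" sgdisk "$DISK" --new=1:2048:264191    # serialize like the original helper

# The SPDK helper waits for the partition's "add" uevent; polling for the
# block node is a cruder equivalent that is good enough for a sketch.
for ((i = 0; i < 50; i++)); do
    [[ -b $PART ]] && break
    sleep 0.1
done
[[ -b $PART ]] || { echo "$PART never appeared" >&2; exit 1; }

mkfs.ext4 -qF "$PART"                               # same mkfs invocation the test runs next

The mkfs and the mount onto test/setup/nvme_mount happen in the next part of the trace, where the test then runs setup.sh config with PCI_ALLOWED=0000:00:11.0 and checks that the mounted namespace is reported as an active device that setup.sh refuses to rebind.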
00:05:14.104 13:04:10 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:14.104 13:04:10 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:14.104 13:04:10 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 70993 00:05:14.104 13:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:14.104 13:04:10 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:05:14.104 13:04:10 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:14.104 13:04:10 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:14.104 13:04:10 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:14.104 13:04:10 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:14.104 13:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:14.104 13:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:14.104 13:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:14.104 13:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:14.104 13:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:14.104 13:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:14.104 13:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:14.104 13:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:14.104 13:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:14.104 13:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.104 13:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:14.104 13:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:14.104 13:04:10 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:14.104 13:04:10 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:14.104 13:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:14.104 13:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:14.104 13:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:14.104 13:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.104 13:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:14.104 13:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.104 13:04:10 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:14.104 13:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.361 13:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:14.361 13:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.361 13:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:14.361 13:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:14.361 13:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:14.361 13:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:14.362 13:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:14.362 13:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:05:14.362 13:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:14.362 13:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:14.362 13:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:14.362 13:04:10 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:14.362 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:14.362 13:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:14.362 13:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:14.619 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:14.619 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:14.619 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:14.619 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:14.619 13:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:05:14.619 13:04:11 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:05:14.619 13:04:11 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:14.619 13:04:11 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:14.619 13:04:11 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:14.619 13:04:11 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:14.619 13:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:14.619 13:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:14.619 13:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local 
mounts=nvme0n1:nvme0n1 00:05:14.619 13:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:14.619 13:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:14.619 13:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:14.619 13:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:14.619 13:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:14.619 13:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:14.619 13:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.619 13:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:14.619 13:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:14.619 13:04:11 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:14.619 13:04:11 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:14.876 13:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:14.876 13:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:14.876 13:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:14.876 13:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.876 13:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:14.876 13:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.133 13:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:15.133 13:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.133 13:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:15.133 13:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.133 13:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:15.133 13:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:15.133 13:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:15.133 13:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:15.133 13:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:15.133 13:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:15.133 13:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:05:15.133 13:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:15.133 13:04:11 setup.sh.devices.nvme_mount -- 
setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:15.133 13:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:15.133 13:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:15.133 13:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:15.133 13:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:15.133 13:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:15.133 13:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.133 13:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:15.133 13:04:11 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:15.133 13:04:11 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:15.133 13:04:11 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:15.411 13:04:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:15.411 13:04:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:15.411 13:04:12 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:15.411 13:04:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.411 13:04:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:15.411 13:04:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.668 13:04:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:15.668 13:04:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.668 13:04:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:15.668 13:04:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.668 13:04:12 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:15.668 13:04:12 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:15.668 13:04:12 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:15.668 13:04:12 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:15.668 13:04:12 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:15.668 13:04:12 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:15.668 13:04:12 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:15.668 13:04:12 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:15.668 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:15.668 00:05:15.668 real 0m3.990s 00:05:15.668 user 0m0.683s 00:05:15.668 sys 0m1.035s 00:05:15.668 13:04:12 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:15.668 13:04:12 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:05:15.668 ************************************ 00:05:15.668 END TEST nvme_mount 00:05:15.668 
************************************ 00:05:15.668 13:04:12 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:15.668 13:04:12 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:15.668 13:04:12 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:15.668 13:04:12 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:15.924 ************************************ 00:05:15.924 START TEST dm_mount 00:05:15.924 ************************************ 00:05:15.924 13:04:12 setup.sh.devices.dm_mount -- common/autotest_common.sh@1121 -- # dm_mount 00:05:15.924 13:04:12 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:15.924 13:04:12 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:15.924 13:04:12 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:15.924 13:04:12 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:15.924 13:04:12 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:15.924 13:04:12 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:15.924 13:04:12 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:15.924 13:04:12 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:15.924 13:04:12 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:15.924 13:04:12 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:15.924 13:04:12 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:15.924 13:04:12 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:15.924 13:04:12 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:15.925 13:04:12 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:15.925 13:04:12 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:15.925 13:04:12 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:15.925 13:04:12 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:15.925 13:04:12 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:15.925 13:04:12 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:15.925 13:04:12 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:15.925 13:04:12 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:16.856 Creating new GPT entries in memory. 00:05:16.856 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:16.856 other utilities. 00:05:16.856 13:04:13 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:16.856 13:04:13 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:16.856 13:04:13 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:16.856 13:04:13 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:16.856 13:04:13 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:17.789 Creating new GPT entries in memory. 00:05:17.789 The operation has completed successfully. 
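The sgdisk calls logged just above and just below follow the sector arithmetic in setup/common.sh: the disk is zapped, the 1 GiB request is divided down to 262144 units, and each partition is laid out back-to-back starting at unit 2048. A minimal sketch of that loop, with the values taken from the logged commands and the loop body paraphrased rather than copied from the script:

  disk=/dev/nvme0n1
  part_no=2
  size=$(( 1073741824 / 4096 ))                # 262144, the per-partition span
  sgdisk "$disk" --zap-all                     # destroy the old GPT first, as logged
  part_start=0 part_end=0
  for (( part = 1; part <= part_no; part++ )); do
    (( part_start = part_start == 0 ? 2048 : part_end + 1 ))   # 2048, then 264192
    (( part_end = part_start + size - 1 ))                     # 264191, then 526335
    flock "$disk" sgdisk "$disk" --new="$part:$part_start:$part_end"
  done

This reproduces exactly the two ranges seen in the trace, 1:2048:264191 and 2:264192:526335; the flock serializes partitioners the same way the logged command does.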
00:05:17.789 13:04:14 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:17.789 13:04:14 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:17.789 13:04:14 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:17.789 13:04:14 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:17.789 13:04:14 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:05:19.163 The operation has completed successfully. 00:05:19.163 13:04:15 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:19.163 13:04:15 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:19.163 13:04:15 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 71426 00:05:19.163 13:04:15 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:19.163 13:04:15 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:19.163 13:04:15 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:19.163 13:04:15 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:19.163 13:04:15 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:19.163 13:04:15 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:19.163 13:04:15 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:19.163 13:04:15 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:19.163 13:04:15 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:19.163 13:04:15 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:19.163 13:04:15 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:19.163 13:04:15 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:19.163 13:04:15 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:19.163 13:04:15 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:19.163 13:04:15 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:05:19.163 13:04:15 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:19.163 13:04:15 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:19.163 13:04:15 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:19.164 13:04:15 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:19.164 13:04:15 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:19.164 13:04:15 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:19.164 13:04:15 setup.sh.devices.dm_mount -- setup/devices.sh@49 
-- # local mounts=nvme0n1:nvme_dm_test 00:05:19.164 13:04:15 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:19.164 13:04:15 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:19.164 13:04:15 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:19.164 13:04:15 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:19.164 13:04:15 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:05:19.164 13:04:15 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:19.164 13:04:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.164 13:04:15 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:19.164 13:04:15 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:19.164 13:04:15 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:19.164 13:04:15 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:19.164 13:04:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:19.164 13:04:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:19.164 13:04:15 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:19.164 13:04:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.164 13:04:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:19.164 13:04:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.423 13:04:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:19.423 13:04:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.423 13:04:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:19.423 13:04:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.423 13:04:16 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:19.423 13:04:16 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:05:19.423 13:04:16 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:19.423 13:04:16 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:19.423 13:04:16 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:19.423 13:04:16 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:19.423 13:04:16 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:19.423 13:04:16 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:19.423 13:04:16 
setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:19.423 13:04:16 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:19.423 13:04:16 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:19.423 13:04:16 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:19.423 13:04:16 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:19.423 13:04:16 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:19.423 13:04:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.423 13:04:16 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:19.423 13:04:16 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:19.423 13:04:16 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:19.423 13:04:16 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:19.681 13:04:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:19.681 13:04:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:19.681 13:04:16 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:19.681 13:04:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.681 13:04:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:19.681 13:04:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.681 13:04:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:19.681 13:04:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.940 13:04:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:19.940 13:04:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.940 13:04:16 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:19.940 13:04:16 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:19.940 13:04:16 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:19.940 13:04:16 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:19.940 13:04:16 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:19.940 13:04:16 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:19.940 13:04:16 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:19.941 13:04:16 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:19.941 13:04:16 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:19.941 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:19.941 13:04:16 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:19.941 13:04:16 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all 
/dev/nvme0n1p2 00:05:19.941 00:05:19.941 real 0m4.160s 00:05:19.941 user 0m0.451s 00:05:19.941 sys 0m0.672s 00:05:19.941 13:04:16 setup.sh.devices.dm_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:19.941 13:04:16 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:19.941 ************************************ 00:05:19.941 END TEST dm_mount 00:05:19.941 ************************************ 00:05:19.941 13:04:16 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:19.941 13:04:16 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:19.941 13:04:16 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:19.941 13:04:16 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:19.941 13:04:16 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:19.941 13:04:16 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:19.941 13:04:16 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:20.199 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:20.199 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:20.199 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:20.199 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:20.199 13:04:16 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:20.199 13:04:16 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:20.199 13:04:16 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:20.199 13:04:16 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:20.199 13:04:16 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:20.199 13:04:16 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:20.199 13:04:16 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:20.199 ************************************ 00:05:20.199 END TEST devices 00:05:20.199 ************************************ 00:05:20.199 00:05:20.199 real 0m9.664s 00:05:20.199 user 0m1.752s 00:05:20.199 sys 0m2.317s 00:05:20.199 13:04:16 setup.sh.devices -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:20.199 13:04:16 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:20.458 ************************************ 00:05:20.458 END TEST setup.sh 00:05:20.458 ************************************ 00:05:20.458 00:05:20.458 real 0m21.677s 00:05:20.458 user 0m6.944s 00:05:20.458 sys 0m9.025s 00:05:20.458 13:04:16 setup.sh -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:20.458 13:04:16 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:20.458 13:04:16 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:21.025 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:21.025 Hugepages 00:05:21.025 node hugesize free / total 00:05:21.025 node0 1048576kB 0 / 0 00:05:21.025 node0 2048kB 2048 / 2048 00:05:21.025 00:05:21.025 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:21.025 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:21.025 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:05:21.283 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 
nvme0n2 nvme0n3 00:05:21.283 13:04:17 -- spdk/autotest.sh@130 -- # uname -s 00:05:21.283 13:04:17 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:21.283 13:04:17 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:21.283 13:04:17 -- common/autotest_common.sh@1527 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:21.848 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:21.848 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:22.106 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:22.106 13:04:18 -- common/autotest_common.sh@1528 -- # sleep 1 00:05:23.066 13:04:19 -- common/autotest_common.sh@1529 -- # bdfs=() 00:05:23.066 13:04:19 -- common/autotest_common.sh@1529 -- # local bdfs 00:05:23.066 13:04:19 -- common/autotest_common.sh@1530 -- # bdfs=($(get_nvme_bdfs)) 00:05:23.066 13:04:19 -- common/autotest_common.sh@1530 -- # get_nvme_bdfs 00:05:23.066 13:04:19 -- common/autotest_common.sh@1509 -- # bdfs=() 00:05:23.066 13:04:19 -- common/autotest_common.sh@1509 -- # local bdfs 00:05:23.066 13:04:19 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:23.066 13:04:19 -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:23.066 13:04:19 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:05:23.066 13:04:19 -- common/autotest_common.sh@1511 -- # (( 2 == 0 )) 00:05:23.066 13:04:19 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:23.066 13:04:19 -- common/autotest_common.sh@1532 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:23.364 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:23.364 Waiting for block devices as requested 00:05:23.623 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:23.623 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:23.623 13:04:20 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 00:05:23.623 13:04:20 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:23.623 13:04:20 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:23.623 13:04:20 -- common/autotest_common.sh@1498 -- # grep 0000:00:10.0/nvme/nvme 00:05:23.623 13:04:20 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:23.623 13:04:20 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:23.623 13:04:20 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:23.623 13:04:20 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme1 00:05:23.623 13:04:20 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme1 00:05:23.623 13:04:20 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme1 ]] 00:05:23.623 13:04:20 -- common/autotest_common.sh@1541 -- # grep oacs 00:05:23.623 13:04:20 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:05:23.623 13:04:20 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme1 00:05:23.623 13:04:20 -- common/autotest_common.sh@1541 -- # oacs=' 0x12a' 00:05:23.623 13:04:20 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:05:23.623 13:04:20 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:05:23.623 13:04:20 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme1 
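The controller lookup and capability check traced here (and repeated below for the second controller) boil down to two steps: resolve the PCI address to its /dev/nvmeX node through sysfs, then read the OACS field with nvme-cli. A condensed sketch of what the logged pipeline does; the variable names are illustrative, not the autotest helpers themselves:

  bdf=0000:00:10.0
  # /sys/class/nvme/nvmeX resolves to a sysfs path containing "<bdf>/nvme/nvmeX"
  ctrlr=$(basename "$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")")
  # OACS bit 3 (0x8) advertises namespace management
  oacs=$(nvme id-ctrl "/dev/$ctrlr" | grep oacs | cut -d: -f2)
  if (( oacs & 0x8 )); then
    echo "$ctrlr supports namespace management"
  fi

With oacs reported as 0x12a above, the masked value is 8, which is why the [[ 8 -ne 0 ]] branch is taken in the trace.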
00:05:23.623 13:04:20 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:05:23.623 13:04:20 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:05:23.623 13:04:20 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:05:23.623 13:04:20 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:05:23.623 13:04:20 -- common/autotest_common.sh@1553 -- # continue 00:05:23.623 13:04:20 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 00:05:23.623 13:04:20 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:23.623 13:04:20 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:23.623 13:04:20 -- common/autotest_common.sh@1498 -- # grep 0000:00:11.0/nvme/nvme 00:05:23.623 13:04:20 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:23.623 13:04:20 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:23.623 13:04:20 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:23.623 13:04:20 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme0 00:05:23.623 13:04:20 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme0 00:05:23.623 13:04:20 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme0 ]] 00:05:23.623 13:04:20 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme0 00:05:23.623 13:04:20 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:05:23.623 13:04:20 -- common/autotest_common.sh@1541 -- # grep oacs 00:05:23.623 13:04:20 -- common/autotest_common.sh@1541 -- # oacs=' 0x12a' 00:05:23.623 13:04:20 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:05:23.623 13:04:20 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:05:23.623 13:04:20 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme0 00:05:23.623 13:04:20 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:05:23.623 13:04:20 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:05:23.882 13:04:20 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:05:23.882 13:04:20 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:05:23.882 13:04:20 -- common/autotest_common.sh@1553 -- # continue 00:05:23.882 13:04:20 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:23.882 13:04:20 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:23.882 13:04:20 -- common/autotest_common.sh@10 -- # set +x 00:05:23.882 13:04:20 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:23.882 13:04:20 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:23.882 13:04:20 -- common/autotest_common.sh@10 -- # set +x 00:05:23.882 13:04:20 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:24.448 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:24.448 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:24.707 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:24.707 13:04:21 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:24.707 13:04:21 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:24.707 13:04:21 -- common/autotest_common.sh@10 -- # set +x 00:05:24.707 13:04:21 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:24.707 13:04:21 -- common/autotest_common.sh@1587 -- # mapfile -t bdfs 00:05:24.708 13:04:21 -- common/autotest_common.sh@1587 -- # get_nvme_bdfs_by_id 0x0a54 00:05:24.708 13:04:21 -- common/autotest_common.sh@1573 -- 
# bdfs=() 00:05:24.708 13:04:21 -- common/autotest_common.sh@1573 -- # local bdfs 00:05:24.708 13:04:21 -- common/autotest_common.sh@1575 -- # get_nvme_bdfs 00:05:24.708 13:04:21 -- common/autotest_common.sh@1509 -- # bdfs=() 00:05:24.708 13:04:21 -- common/autotest_common.sh@1509 -- # local bdfs 00:05:24.708 13:04:21 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:24.708 13:04:21 -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:24.708 13:04:21 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:05:24.708 13:04:21 -- common/autotest_common.sh@1511 -- # (( 2 == 0 )) 00:05:24.708 13:04:21 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:24.708 13:04:21 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:05:24.708 13:04:21 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:24.708 13:04:21 -- common/autotest_common.sh@1576 -- # device=0x0010 00:05:24.708 13:04:21 -- common/autotest_common.sh@1577 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:24.708 13:04:21 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:05:24.708 13:04:21 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:24.708 13:04:21 -- common/autotest_common.sh@1576 -- # device=0x0010 00:05:24.708 13:04:21 -- common/autotest_common.sh@1577 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:24.708 13:04:21 -- common/autotest_common.sh@1582 -- # printf '%s\n' 00:05:24.708 13:04:21 -- common/autotest_common.sh@1588 -- # [[ -z '' ]] 00:05:24.708 13:04:21 -- common/autotest_common.sh@1589 -- # return 0 00:05:24.708 13:04:21 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:24.708 13:04:21 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:24.708 13:04:21 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:24.708 13:04:21 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:24.708 13:04:21 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:24.708 13:04:21 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:24.708 13:04:21 -- common/autotest_common.sh@10 -- # set +x 00:05:24.708 13:04:21 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:24.708 13:04:21 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:24.708 13:04:21 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:24.708 13:04:21 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:24.708 13:04:21 -- common/autotest_common.sh@10 -- # set +x 00:05:24.708 ************************************ 00:05:24.708 START TEST env 00:05:24.708 ************************************ 00:05:24.708 13:04:21 env -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:24.967 * Looking for test storage... 
00:05:24.967 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:24.967 13:04:21 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:24.967 13:04:21 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:24.967 13:04:21 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:24.967 13:04:21 env -- common/autotest_common.sh@10 -- # set +x 00:05:24.967 ************************************ 00:05:24.967 START TEST env_memory 00:05:24.967 ************************************ 00:05:24.967 13:04:21 env.env_memory -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:24.967 00:05:24.967 00:05:24.967 CUnit - A unit testing framework for C - Version 2.1-3 00:05:24.967 http://cunit.sourceforge.net/ 00:05:24.967 00:05:24.967 00:05:24.967 Suite: memory 00:05:24.967 Test: alloc and free memory map ...[2024-07-15 13:04:21.517753] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:24.967 passed 00:05:24.967 Test: mem map translation ...[2024-07-15 13:04:21.549620] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:24.967 [2024-07-15 13:04:21.549993] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:24.967 [2024-07-15 13:04:21.550503] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:24.967 [2024-07-15 13:04:21.550857] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:24.967 passed 00:05:24.967 Test: mem map registration ...[2024-07-15 13:04:21.615527] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:24.967 [2024-07-15 13:04:21.615885] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:24.967 passed 00:05:24.967 Test: mem map adjacent registrations ...passed 00:05:24.967 00:05:24.967 Run Summary: Type Total Ran Passed Failed Inactive 00:05:24.967 suites 1 1 n/a 0 0 00:05:24.967 tests 4 4 4 0 0 00:05:24.967 asserts 152 152 152 0 n/a 00:05:24.967 00:05:24.967 Elapsed time = 0.214 seconds 00:05:24.967 00:05:24.967 real 0m0.236s 00:05:24.967 user 0m0.218s 00:05:25.226 sys 0m0.011s 00:05:25.226 13:04:21 env.env_memory -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:25.226 13:04:21 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:25.226 ************************************ 00:05:25.226 END TEST env_memory 00:05:25.226 ************************************ 00:05:25.226 13:04:21 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:25.226 13:04:21 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:25.226 13:04:21 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:25.226 13:04:21 env -- common/autotest_common.sh@10 -- # set +x 00:05:25.226 ************************************ 00:05:25.226 START TEST env_vtophys 00:05:25.226 ************************************ 00:05:25.226 13:04:21 
env.env_vtophys -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:25.226 EAL: lib.eal log level changed from notice to debug 00:05:25.226 EAL: Detected lcore 0 as core 0 on socket 0 00:05:25.226 EAL: Detected lcore 1 as core 0 on socket 0 00:05:25.226 EAL: Detected lcore 2 as core 0 on socket 0 00:05:25.226 EAL: Detected lcore 3 as core 0 on socket 0 00:05:25.226 EAL: Detected lcore 4 as core 0 on socket 0 00:05:25.226 EAL: Detected lcore 5 as core 0 on socket 0 00:05:25.226 EAL: Detected lcore 6 as core 0 on socket 0 00:05:25.226 EAL: Detected lcore 7 as core 0 on socket 0 00:05:25.226 EAL: Detected lcore 8 as core 0 on socket 0 00:05:25.226 EAL: Detected lcore 9 as core 0 on socket 0 00:05:25.226 EAL: Maximum logical cores by configuration: 128 00:05:25.226 EAL: Detected CPU lcores: 10 00:05:25.226 EAL: Detected NUMA nodes: 1 00:05:25.226 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:05:25.226 EAL: Detected shared linkage of DPDK 00:05:25.226 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:05:25.226 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:05:25.226 EAL: Registered [vdev] bus. 00:05:25.226 EAL: bus.vdev log level changed from disabled to notice 00:05:25.226 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:05:25.226 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:05:25.226 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:25.226 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:25.226 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:05:25.226 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:05:25.226 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:05:25.226 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:05:25.226 EAL: No shared files mode enabled, IPC will be disabled 00:05:25.226 EAL: No shared files mode enabled, IPC is disabled 00:05:25.226 EAL: Selected IOVA mode 'PA' 00:05:25.226 EAL: Probing VFIO support... 00:05:25.226 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:25.226 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:25.226 EAL: Ask a virtual area of 0x2e000 bytes 00:05:25.226 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:25.226 EAL: Setting up physically contiguous memory... 
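The VFIO probe above comes down to checking whether the vfio kernel modules are present under /sys/module; in this VM they are not, so EAL skips VFIO support, consistent with the uio_pci_generic bindings seen earlier in this log. A shell equivalent of that presence check (illustrative only; EAL performs it internally in C):

  for mod in vfio vfio_pci; do
    if [[ -d /sys/module/$mod ]]; then
      echo "$mod loaded"
    else
      echo "$mod not loaded"   # matches the 'error 2 (No such file or directory)' lines
    fi
  done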
00:05:25.226 EAL: Setting maximum number of open files to 524288 00:05:25.226 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:25.226 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:25.226 EAL: Ask a virtual area of 0x61000 bytes 00:05:25.226 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:25.226 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:25.226 EAL: Ask a virtual area of 0x400000000 bytes 00:05:25.226 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:25.226 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:25.226 EAL: Ask a virtual area of 0x61000 bytes 00:05:25.226 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:25.226 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:25.226 EAL: Ask a virtual area of 0x400000000 bytes 00:05:25.226 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:25.226 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:25.226 EAL: Ask a virtual area of 0x61000 bytes 00:05:25.226 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:25.226 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:25.226 EAL: Ask a virtual area of 0x400000000 bytes 00:05:25.226 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:25.226 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:25.226 EAL: Ask a virtual area of 0x61000 bytes 00:05:25.226 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:25.226 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:25.226 EAL: Ask a virtual area of 0x400000000 bytes 00:05:25.226 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:25.226 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:25.226 EAL: Hugepages will be freed exactly as allocated. 00:05:25.226 EAL: No shared files mode enabled, IPC is disabled 00:05:25.226 EAL: No shared files mode enabled, IPC is disabled 00:05:25.226 EAL: TSC frequency is ~2200000 KHz 00:05:25.226 EAL: Main lcore 0 is ready (tid=7f7820711a00;cpuset=[0]) 00:05:25.226 EAL: Trying to obtain current memory policy. 00:05:25.226 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:25.227 EAL: Restoring previous memory policy: 0 00:05:25.227 EAL: request: mp_malloc_sync 00:05:25.227 EAL: No shared files mode enabled, IPC is disabled 00:05:25.227 EAL: Heap on socket 0 was expanded by 2MB 00:05:25.227 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:25.227 EAL: No shared files mode enabled, IPC is disabled 00:05:25.227 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:25.227 EAL: Mem event callback 'spdk:(nil)' registered 00:05:25.227 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:25.227 00:05:25.227 00:05:25.227 CUnit - A unit testing framework for C - Version 2.1-3 00:05:25.227 http://cunit.sourceforge.net/ 00:05:25.227 00:05:25.227 00:05:25.227 Suite: components_suite 00:05:25.227 Test: vtophys_malloc_test ...passed 00:05:25.227 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
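Each of the four memseg lists reserved earlier in this trace is sized as n_segs times the hugepage size, which is where the 0x400000000-byte virtual areas come from:

  echo $(( 8192 * 2097152 ))              # 17179869184 bytes
  printf '0x%x\n' $(( 8192 * 2097152 ))   # 0x400000000, the size logged per memseg list

Four such lists reserve 64 GiB of virtual address space up front, even though only 2048 two-megabyte hugepages (4 GiB) are actually available on this node per the hugepages table printed earlier.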
00:05:25.227 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:25.227 EAL: Restoring previous memory policy: 4 00:05:25.227 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.227 EAL: request: mp_malloc_sync 00:05:25.227 EAL: No shared files mode enabled, IPC is disabled 00:05:25.227 EAL: Heap on socket 0 was expanded by 4MB 00:05:25.227 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.227 EAL: request: mp_malloc_sync 00:05:25.227 EAL: No shared files mode enabled, IPC is disabled 00:05:25.227 EAL: Heap on socket 0 was shrunk by 4MB 00:05:25.227 EAL: Trying to obtain current memory policy. 00:05:25.227 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:25.227 EAL: Restoring previous memory policy: 4 00:05:25.227 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.227 EAL: request: mp_malloc_sync 00:05:25.227 EAL: No shared files mode enabled, IPC is disabled 00:05:25.227 EAL: Heap on socket 0 was expanded by 6MB 00:05:25.227 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.227 EAL: request: mp_malloc_sync 00:05:25.227 EAL: No shared files mode enabled, IPC is disabled 00:05:25.227 EAL: Heap on socket 0 was shrunk by 6MB 00:05:25.227 EAL: Trying to obtain current memory policy. 00:05:25.227 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:25.227 EAL: Restoring previous memory policy: 4 00:05:25.227 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.227 EAL: request: mp_malloc_sync 00:05:25.227 EAL: No shared files mode enabled, IPC is disabled 00:05:25.227 EAL: Heap on socket 0 was expanded by 10MB 00:05:25.227 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.227 EAL: request: mp_malloc_sync 00:05:25.227 EAL: No shared files mode enabled, IPC is disabled 00:05:25.227 EAL: Heap on socket 0 was shrunk by 10MB 00:05:25.227 EAL: Trying to obtain current memory policy. 00:05:25.227 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:25.227 EAL: Restoring previous memory policy: 4 00:05:25.227 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.227 EAL: request: mp_malloc_sync 00:05:25.227 EAL: No shared files mode enabled, IPC is disabled 00:05:25.227 EAL: Heap on socket 0 was expanded by 18MB 00:05:25.227 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.227 EAL: request: mp_malloc_sync 00:05:25.227 EAL: No shared files mode enabled, IPC is disabled 00:05:25.227 EAL: Heap on socket 0 was shrunk by 18MB 00:05:25.227 EAL: Trying to obtain current memory policy. 00:05:25.227 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:25.227 EAL: Restoring previous memory policy: 4 00:05:25.227 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.227 EAL: request: mp_malloc_sync 00:05:25.227 EAL: No shared files mode enabled, IPC is disabled 00:05:25.227 EAL: Heap on socket 0 was expanded by 34MB 00:05:25.227 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.227 EAL: request: mp_malloc_sync 00:05:25.227 EAL: No shared files mode enabled, IPC is disabled 00:05:25.227 EAL: Heap on socket 0 was shrunk by 34MB 00:05:25.227 EAL: Trying to obtain current memory policy. 
00:05:25.227 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:25.227 EAL: Restoring previous memory policy: 4 00:05:25.227 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.227 EAL: request: mp_malloc_sync 00:05:25.227 EAL: No shared files mode enabled, IPC is disabled 00:05:25.227 EAL: Heap on socket 0 was expanded by 66MB 00:05:25.485 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.485 EAL: request: mp_malloc_sync 00:05:25.485 EAL: No shared files mode enabled, IPC is disabled 00:05:25.485 EAL: Heap on socket 0 was shrunk by 66MB 00:05:25.485 EAL: Trying to obtain current memory policy. 00:05:25.485 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:25.485 EAL: Restoring previous memory policy: 4 00:05:25.485 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.485 EAL: request: mp_malloc_sync 00:05:25.485 EAL: No shared files mode enabled, IPC is disabled 00:05:25.485 EAL: Heap on socket 0 was expanded by 130MB 00:05:25.485 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.485 EAL: request: mp_malloc_sync 00:05:25.485 EAL: No shared files mode enabled, IPC is disabled 00:05:25.485 EAL: Heap on socket 0 was shrunk by 130MB 00:05:25.485 EAL: Trying to obtain current memory policy. 00:05:25.485 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:25.485 EAL: Restoring previous memory policy: 4 00:05:25.485 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.485 EAL: request: mp_malloc_sync 00:05:25.485 EAL: No shared files mode enabled, IPC is disabled 00:05:25.485 EAL: Heap on socket 0 was expanded by 258MB 00:05:25.485 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.750 EAL: request: mp_malloc_sync 00:05:25.750 EAL: No shared files mode enabled, IPC is disabled 00:05:25.750 EAL: Heap on socket 0 was shrunk by 258MB 00:05:25.750 EAL: Trying to obtain current memory policy. 00:05:25.750 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:25.750 EAL: Restoring previous memory policy: 4 00:05:25.750 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.750 EAL: request: mp_malloc_sync 00:05:25.750 EAL: No shared files mode enabled, IPC is disabled 00:05:25.750 EAL: Heap on socket 0 was expanded by 514MB 00:05:25.750 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.007 EAL: request: mp_malloc_sync 00:05:26.007 EAL: No shared files mode enabled, IPC is disabled 00:05:26.007 EAL: Heap on socket 0 was shrunk by 514MB 00:05:26.007 EAL: Trying to obtain current memory policy. 
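The expand/shrink sizes stepped through in this suite (4MB, 6MB, 10MB, ... 514MB above, and 1026MB just below) follow a 2^k + 2 MB progression; whether the extra 2MB is the test's intent or per-allocation overhead is not visible from the log:

  for k in $(seq 1 10); do printf '%dMB ' $(( (1 << k) + 2 )); done; echo
  # 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB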
00:05:26.007 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:26.265 EAL: Restoring previous memory policy: 4 00:05:26.265 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.265 EAL: request: mp_malloc_sync 00:05:26.265 EAL: No shared files mode enabled, IPC is disabled 00:05:26.265 EAL: Heap on socket 0 was expanded by 1026MB 00:05:26.523 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.523 passed 00:05:26.523 00:05:26.523 Run Summary: Type Total Ran Passed Failed Inactive 00:05:26.523 suites 1 1 n/a 0 0 00:05:26.523 tests 2 2 2 0 0 00:05:26.523 asserts 5330 5330 5330 0 n/a 00:05:26.523 00:05:26.523 Elapsed time = 1.301 seconds 00:05:26.523 EAL: request: mp_malloc_sync 00:05:26.523 EAL: No shared files mode enabled, IPC is disabled 00:05:26.523 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:26.523 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.523 EAL: request: mp_malloc_sync 00:05:26.523 EAL: No shared files mode enabled, IPC is disabled 00:05:26.523 EAL: Heap on socket 0 was shrunk by 2MB 00:05:26.523 EAL: No shared files mode enabled, IPC is disabled 00:05:26.523 EAL: No shared files mode enabled, IPC is disabled 00:05:26.523 EAL: No shared files mode enabled, IPC is disabled 00:05:26.523 ************************************ 00:05:26.523 END TEST env_vtophys 00:05:26.523 ************************************ 00:05:26.523 00:05:26.523 real 0m1.497s 00:05:26.523 user 0m0.828s 00:05:26.523 sys 0m0.537s 00:05:26.523 13:04:23 env.env_vtophys -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:26.523 13:04:23 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:26.782 13:04:23 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:26.782 13:04:23 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:26.782 13:04:23 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:26.782 13:04:23 env -- common/autotest_common.sh@10 -- # set +x 00:05:26.782 ************************************ 00:05:26.782 START TEST env_pci 00:05:26.782 ************************************ 00:05:26.782 13:04:23 env.env_pci -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:26.782 00:05:26.782 00:05:26.782 CUnit - A unit testing framework for C - Version 2.1-3 00:05:26.782 http://cunit.sourceforge.net/ 00:05:26.782 00:05:26.782 00:05:26.782 Suite: pci 00:05:26.782 Test: pci_hook ...[2024-07-15 13:04:23.321401] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 72612 has claimed it 00:05:26.782 passed 00:05:26.782 00:05:26.782 Run Summary: Type Total Ran Passed Failed Inactive 00:05:26.782 suites 1 1 n/a 0 0 00:05:26.782 tests 1 1 1 0 0 00:05:26.782 asserts 25 25 25 0 n/a 00:05:26.782 00:05:26.782 Elapsed time = 0.002 seconds 00:05:26.782 EAL: Cannot find device (10000:00:01.0) 00:05:26.782 EAL: Failed to attach device on primary process 00:05:26.782 00:05:26.782 real 0m0.019s 00:05:26.782 user 0m0.010s 00:05:26.782 sys 0m0.009s 00:05:26.782 13:04:23 env.env_pci -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:26.782 ************************************ 00:05:26.782 END TEST env_pci 00:05:26.782 ************************************ 00:05:26.782 13:04:23 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:26.782 13:04:23 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:26.782 13:04:23 env -- env/env.sh@15 -- # uname 00:05:26.782 13:04:23 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:26.782 13:04:23 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:26.782 13:04:23 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:26.782 13:04:23 env -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:05:26.782 13:04:23 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:26.782 13:04:23 env -- common/autotest_common.sh@10 -- # set +x 00:05:26.782 ************************************ 00:05:26.782 START TEST env_dpdk_post_init 00:05:26.782 ************************************ 00:05:26.782 13:04:23 env.env_dpdk_post_init -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:26.782 EAL: Detected CPU lcores: 10 00:05:26.782 EAL: Detected NUMA nodes: 1 00:05:26.782 EAL: Detected shared linkage of DPDK 00:05:26.782 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:26.782 EAL: Selected IOVA mode 'PA' 00:05:26.782 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:27.040 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:27.040 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:27.040 Starting DPDK initialization... 00:05:27.040 Starting SPDK post initialization... 00:05:27.040 SPDK NVMe probe 00:05:27.040 Attaching to 0000:00:10.0 00:05:27.040 Attaching to 0000:00:11.0 00:05:27.040 Attached to 0000:00:10.0 00:05:27.040 Attached to 0000:00:11.0 00:05:27.040 Cleaning up... 00:05:27.040 00:05:27.040 real 0m0.170s 00:05:27.040 user 0m0.038s 00:05:27.040 sys 0m0.032s 00:05:27.040 13:04:23 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:27.041 13:04:23 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:27.041 ************************************ 00:05:27.041 END TEST env_dpdk_post_init 00:05:27.041 ************************************ 00:05:27.041 13:04:23 env -- env/env.sh@26 -- # uname 00:05:27.041 13:04:23 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:27.041 13:04:23 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:27.041 13:04:23 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:27.041 13:04:23 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:27.041 13:04:23 env -- common/autotest_common.sh@10 -- # set +x 00:05:27.041 ************************************ 00:05:27.041 START TEST env_mem_callbacks 00:05:27.041 ************************************ 00:05:27.041 13:04:23 env.env_mem_callbacks -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:27.041 EAL: Detected CPU lcores: 10 00:05:27.041 EAL: Detected NUMA nodes: 1 00:05:27.041 EAL: Detected shared linkage of DPDK 00:05:27.041 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:27.041 EAL: Selected IOVA mode 'PA' 00:05:27.041 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:27.041 00:05:27.041 00:05:27.041 CUnit - A unit testing framework for C - Version 2.1-3 00:05:27.041 http://cunit.sourceforge.net/ 00:05:27.041 00:05:27.041 00:05:27.041 Suite: memory 00:05:27.041 Test: test ... 
00:05:27.041 register 0x200000200000 2097152 00:05:27.041 malloc 3145728 00:05:27.041 register 0x200000400000 4194304 00:05:27.041 buf 0x200000500000 len 3145728 PASSED 00:05:27.041 malloc 64 00:05:27.041 buf 0x2000004fff40 len 64 PASSED 00:05:27.041 malloc 4194304 00:05:27.041 register 0x200000800000 6291456 00:05:27.041 buf 0x200000a00000 len 4194304 PASSED 00:05:27.041 free 0x200000500000 3145728 00:05:27.041 free 0x2000004fff40 64 00:05:27.041 unregister 0x200000400000 4194304 PASSED 00:05:27.041 free 0x200000a00000 4194304 00:05:27.041 unregister 0x200000800000 6291456 PASSED 00:05:27.041 malloc 8388608 00:05:27.041 register 0x200000400000 10485760 00:05:27.041 buf 0x200000600000 len 8388608 PASSED 00:05:27.041 free 0x200000600000 8388608 00:05:27.041 unregister 0x200000400000 10485760 PASSED 00:05:27.041 passed 00:05:27.041 00:05:27.041 Run Summary: Type Total Ran Passed Failed Inactive 00:05:27.041 suites 1 1 n/a 0 0 00:05:27.041 tests 1 1 1 0 0 00:05:27.041 asserts 15 15 15 0 n/a 00:05:27.041 00:05:27.041 Elapsed time = 0.006 seconds 00:05:27.041 00:05:27.041 real 0m0.141s 00:05:27.041 user 0m0.015s 00:05:27.041 sys 0m0.025s 00:05:27.041 13:04:23 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:27.041 13:04:23 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:27.041 ************************************ 00:05:27.041 END TEST env_mem_callbacks 00:05:27.041 ************************************ 00:05:27.299 00:05:27.299 real 0m2.406s 00:05:27.299 user 0m1.234s 00:05:27.299 sys 0m0.817s 00:05:27.299 13:04:23 env -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:27.299 13:04:23 env -- common/autotest_common.sh@10 -- # set +x 00:05:27.299 ************************************ 00:05:27.299 END TEST env 00:05:27.299 ************************************ 00:05:27.299 13:04:23 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:27.299 13:04:23 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:27.299 13:04:23 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:27.299 13:04:23 -- common/autotest_common.sh@10 -- # set +x 00:05:27.299 ************************************ 00:05:27.299 START TEST rpc 00:05:27.299 ************************************ 00:05:27.299 13:04:23 rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:27.299 * Looking for test storage... 00:05:27.299 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:27.299 13:04:23 rpc -- rpc/rpc.sh@65 -- # spdk_pid=72723 00:05:27.299 13:04:23 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:27.299 13:04:23 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:27.299 13:04:23 rpc -- rpc/rpc.sh@67 -- # waitforlisten 72723 00:05:27.299 13:04:23 rpc -- common/autotest_common.sh@827 -- # '[' -z 72723 ']' 00:05:27.299 13:04:23 rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.299 13:04:23 rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:27.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:27.299 13:04:23 rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
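The rpc suite starting here launches spdk_tgt with the bdev tracepoint group enabled (-e bdev) and then waits in waitforlisten until the target is up and its RPC socket answers. A simplified stand-in for that launch-and-wait pattern (the polling loop below is an approximation, not the autotest helper's actual code):

  rootdir=/home/vagrant/spdk_repo/spdk
  "$rootdir/build/bin/spdk_tgt" -e bdev &
  spdk_pid=$!
  for _ in $(seq 1 100); do
    "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
    sleep 0.1
  done
  # rpc_cmd calls against /var/tmp/spdk.sock can proceed once the loop exits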
00:05:27.299 13:04:23 rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:27.299 13:04:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.299 [2024-07-15 13:04:23.985965] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:05:27.299 [2024-07-15 13:04:23.986088] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72723 ] 00:05:27.558 [2024-07-15 13:04:24.124026] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.558 [2024-07-15 13:04:24.212895] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:27.558 [2024-07-15 13:04:24.212962] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 72723' to capture a snapshot of events at runtime. 00:05:27.558 [2024-07-15 13:04:24.212989] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:27.558 [2024-07-15 13:04:24.213014] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:27.558 [2024-07-15 13:04:24.213021] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid72723 for offline analysis/debug. 00:05:27.558 [2024-07-15 13:04:24.213050] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.514 13:04:24 rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:28.514 13:04:24 rpc -- common/autotest_common.sh@860 -- # return 0 00:05:28.514 13:04:24 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:28.514 13:04:24 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:28.514 13:04:24 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:28.514 13:04:24 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:28.514 13:04:24 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:28.514 13:04:24 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:28.514 13:04:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.514 ************************************ 00:05:28.514 START TEST rpc_integrity 00:05:28.514 ************************************ 00:05:28.514 13:04:24 rpc.rpc_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:05:28.514 13:04:24 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:28.514 13:04:24 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:28.514 13:04:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:28.514 13:04:25 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:28.514 13:04:25 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:28.514 13:04:25 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:28.514 13:04:25 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:28.514 13:04:25 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:28.514 13:04:25 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:28.514 13:04:25 rpc.rpc_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:05:28.514 13:04:25 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:28.514 13:04:25 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:28.514 13:04:25 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:28.514 13:04:25 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:28.514 13:04:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:28.514 13:04:25 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:28.514 13:04:25 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:28.514 { 00:05:28.514 "aliases": [ 00:05:28.514 "40634871-8b91-4192-a7e0-1d8bc0806e20" 00:05:28.514 ], 00:05:28.514 "assigned_rate_limits": { 00:05:28.514 "r_mbytes_per_sec": 0, 00:05:28.514 "rw_ios_per_sec": 0, 00:05:28.514 "rw_mbytes_per_sec": 0, 00:05:28.514 "w_mbytes_per_sec": 0 00:05:28.514 }, 00:05:28.514 "block_size": 512, 00:05:28.514 "claimed": false, 00:05:28.514 "driver_specific": {}, 00:05:28.514 "memory_domains": [ 00:05:28.514 { 00:05:28.514 "dma_device_id": "system", 00:05:28.514 "dma_device_type": 1 00:05:28.514 }, 00:05:28.514 { 00:05:28.514 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:28.514 "dma_device_type": 2 00:05:28.514 } 00:05:28.514 ], 00:05:28.514 "name": "Malloc0", 00:05:28.514 "num_blocks": 16384, 00:05:28.514 "product_name": "Malloc disk", 00:05:28.514 "supported_io_types": { 00:05:28.514 "abort": true, 00:05:28.514 "compare": false, 00:05:28.514 "compare_and_write": false, 00:05:28.514 "flush": true, 00:05:28.514 "nvme_admin": false, 00:05:28.514 "nvme_io": false, 00:05:28.514 "read": true, 00:05:28.514 "reset": true, 00:05:28.514 "unmap": true, 00:05:28.514 "write": true, 00:05:28.514 "write_zeroes": true 00:05:28.514 }, 00:05:28.514 "uuid": "40634871-8b91-4192-a7e0-1d8bc0806e20", 00:05:28.514 "zoned": false 00:05:28.514 } 00:05:28.514 ]' 00:05:28.515 13:04:25 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:28.515 13:04:25 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:28.515 13:04:25 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:28.515 13:04:25 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:28.515 13:04:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:28.515 [2024-07-15 13:04:25.135306] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:28.515 [2024-07-15 13:04:25.135351] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:28.515 [2024-07-15 13:04:25.135368] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xcd7f10 00:05:28.515 [2024-07-15 13:04:25.135377] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:28.515 [2024-07-15 13:04:25.136889] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:28.515 [2024-07-15 13:04:25.136939] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:28.515 Passthru0 00:05:28.515 13:04:25 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:28.515 13:04:25 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:28.515 13:04:25 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:28.515 13:04:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:28.515 13:04:25 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:05:28.515 13:04:25 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:28.515 { 00:05:28.515 "aliases": [ 00:05:28.515 "40634871-8b91-4192-a7e0-1d8bc0806e20" 00:05:28.515 ], 00:05:28.515 "assigned_rate_limits": { 00:05:28.515 "r_mbytes_per_sec": 0, 00:05:28.515 "rw_ios_per_sec": 0, 00:05:28.515 "rw_mbytes_per_sec": 0, 00:05:28.515 "w_mbytes_per_sec": 0 00:05:28.515 }, 00:05:28.515 "block_size": 512, 00:05:28.515 "claim_type": "exclusive_write", 00:05:28.515 "claimed": true, 00:05:28.515 "driver_specific": {}, 00:05:28.515 "memory_domains": [ 00:05:28.515 { 00:05:28.515 "dma_device_id": "system", 00:05:28.515 "dma_device_type": 1 00:05:28.515 }, 00:05:28.515 { 00:05:28.515 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:28.515 "dma_device_type": 2 00:05:28.515 } 00:05:28.515 ], 00:05:28.515 "name": "Malloc0", 00:05:28.515 "num_blocks": 16384, 00:05:28.515 "product_name": "Malloc disk", 00:05:28.515 "supported_io_types": { 00:05:28.515 "abort": true, 00:05:28.515 "compare": false, 00:05:28.515 "compare_and_write": false, 00:05:28.515 "flush": true, 00:05:28.515 "nvme_admin": false, 00:05:28.515 "nvme_io": false, 00:05:28.515 "read": true, 00:05:28.515 "reset": true, 00:05:28.515 "unmap": true, 00:05:28.515 "write": true, 00:05:28.515 "write_zeroes": true 00:05:28.515 }, 00:05:28.515 "uuid": "40634871-8b91-4192-a7e0-1d8bc0806e20", 00:05:28.515 "zoned": false 00:05:28.515 }, 00:05:28.515 { 00:05:28.515 "aliases": [ 00:05:28.515 "6d01104b-226f-5ebb-ac35-626dfa3bee9c" 00:05:28.515 ], 00:05:28.515 "assigned_rate_limits": { 00:05:28.515 "r_mbytes_per_sec": 0, 00:05:28.515 "rw_ios_per_sec": 0, 00:05:28.515 "rw_mbytes_per_sec": 0, 00:05:28.515 "w_mbytes_per_sec": 0 00:05:28.515 }, 00:05:28.515 "block_size": 512, 00:05:28.515 "claimed": false, 00:05:28.515 "driver_specific": { 00:05:28.515 "passthru": { 00:05:28.515 "base_bdev_name": "Malloc0", 00:05:28.515 "name": "Passthru0" 00:05:28.515 } 00:05:28.515 }, 00:05:28.515 "memory_domains": [ 00:05:28.515 { 00:05:28.515 "dma_device_id": "system", 00:05:28.515 "dma_device_type": 1 00:05:28.515 }, 00:05:28.515 { 00:05:28.515 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:28.515 "dma_device_type": 2 00:05:28.515 } 00:05:28.515 ], 00:05:28.515 "name": "Passthru0", 00:05:28.515 "num_blocks": 16384, 00:05:28.515 "product_name": "passthru", 00:05:28.515 "supported_io_types": { 00:05:28.515 "abort": true, 00:05:28.515 "compare": false, 00:05:28.515 "compare_and_write": false, 00:05:28.515 "flush": true, 00:05:28.515 "nvme_admin": false, 00:05:28.515 "nvme_io": false, 00:05:28.515 "read": true, 00:05:28.515 "reset": true, 00:05:28.515 "unmap": true, 00:05:28.515 "write": true, 00:05:28.515 "write_zeroes": true 00:05:28.515 }, 00:05:28.515 "uuid": "6d01104b-226f-5ebb-ac35-626dfa3bee9c", 00:05:28.515 "zoned": false 00:05:28.515 } 00:05:28.515 ]' 00:05:28.515 13:04:25 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:28.515 13:04:25 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:28.515 13:04:25 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:28.515 13:04:25 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:28.515 13:04:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:28.515 13:04:25 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:28.515 13:04:25 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:28.515 13:04:25 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 
00:05:28.515 13:04:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:28.515 13:04:25 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:28.515 13:04:25 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:28.515 13:04:25 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:28.515 13:04:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:28.515 13:04:25 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:28.515 13:04:25 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:28.515 13:04:25 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:28.772 13:04:25 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:28.772 00:05:28.772 real 0m0.317s 00:05:28.772 user 0m0.206s 00:05:28.772 sys 0m0.037s 00:05:28.772 13:04:25 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:28.772 ************************************ 00:05:28.772 END TEST rpc_integrity 00:05:28.772 ************************************ 00:05:28.772 13:04:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:28.772 13:04:25 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:28.772 13:04:25 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:28.772 13:04:25 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:28.772 13:04:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.772 ************************************ 00:05:28.772 START TEST rpc_plugins 00:05:28.772 ************************************ 00:05:28.772 13:04:25 rpc.rpc_plugins -- common/autotest_common.sh@1121 -- # rpc_plugins 00:05:28.772 13:04:25 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:28.772 13:04:25 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:28.772 13:04:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:28.772 13:04:25 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:28.772 13:04:25 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:28.772 13:04:25 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:28.772 13:04:25 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:28.772 13:04:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:28.772 13:04:25 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:28.772 13:04:25 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:28.772 { 00:05:28.772 "aliases": [ 00:05:28.772 "d1ee847f-21b7-4015-a761-2ce3f4044fcd" 00:05:28.772 ], 00:05:28.772 "assigned_rate_limits": { 00:05:28.772 "r_mbytes_per_sec": 0, 00:05:28.772 "rw_ios_per_sec": 0, 00:05:28.772 "rw_mbytes_per_sec": 0, 00:05:28.772 "w_mbytes_per_sec": 0 00:05:28.772 }, 00:05:28.772 "block_size": 4096, 00:05:28.772 "claimed": false, 00:05:28.772 "driver_specific": {}, 00:05:28.772 "memory_domains": [ 00:05:28.772 { 00:05:28.772 "dma_device_id": "system", 00:05:28.772 "dma_device_type": 1 00:05:28.772 }, 00:05:28.772 { 00:05:28.772 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:28.772 "dma_device_type": 2 00:05:28.772 } 00:05:28.772 ], 00:05:28.772 "name": "Malloc1", 00:05:28.772 "num_blocks": 256, 00:05:28.772 "product_name": "Malloc disk", 00:05:28.772 "supported_io_types": { 00:05:28.772 "abort": true, 00:05:28.772 "compare": false, 00:05:28.772 "compare_and_write": false, 00:05:28.772 "flush": true, 00:05:28.772 "nvme_admin": false, 00:05:28.772 
"nvme_io": false, 00:05:28.772 "read": true, 00:05:28.772 "reset": true, 00:05:28.772 "unmap": true, 00:05:28.772 "write": true, 00:05:28.772 "write_zeroes": true 00:05:28.772 }, 00:05:28.772 "uuid": "d1ee847f-21b7-4015-a761-2ce3f4044fcd", 00:05:28.772 "zoned": false 00:05:28.772 } 00:05:28.772 ]' 00:05:28.772 13:04:25 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:28.772 13:04:25 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:28.772 13:04:25 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:28.772 13:04:25 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:28.772 13:04:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:28.772 13:04:25 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:28.772 13:04:25 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:28.772 13:04:25 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:28.772 13:04:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:28.772 13:04:25 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:28.772 13:04:25 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:28.772 13:04:25 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:28.772 13:04:25 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:28.772 00:05:28.772 real 0m0.155s 00:05:28.772 user 0m0.098s 00:05:28.772 sys 0m0.022s 00:05:28.772 13:04:25 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:28.772 ************************************ 00:05:28.772 13:04:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:28.772 END TEST rpc_plugins 00:05:28.772 ************************************ 00:05:29.029 13:04:25 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:29.029 13:04:25 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:29.029 13:04:25 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:29.029 13:04:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.029 ************************************ 00:05:29.029 START TEST rpc_trace_cmd_test 00:05:29.029 ************************************ 00:05:29.029 13:04:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1121 -- # rpc_trace_cmd_test 00:05:29.029 13:04:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:29.029 13:04:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:29.029 13:04:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:29.029 13:04:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:29.029 13:04:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:29.029 13:04:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:29.029 "bdev": { 00:05:29.029 "mask": "0x8", 00:05:29.029 "tpoint_mask": "0xffffffffffffffff" 00:05:29.029 }, 00:05:29.029 "bdev_nvme": { 00:05:29.029 "mask": "0x4000", 00:05:29.029 "tpoint_mask": "0x0" 00:05:29.029 }, 00:05:29.029 "blobfs": { 00:05:29.029 "mask": "0x80", 00:05:29.029 "tpoint_mask": "0x0" 00:05:29.029 }, 00:05:29.029 "dsa": { 00:05:29.029 "mask": "0x200", 00:05:29.029 "tpoint_mask": "0x0" 00:05:29.029 }, 00:05:29.029 "ftl": { 00:05:29.029 "mask": "0x40", 00:05:29.029 "tpoint_mask": "0x0" 00:05:29.029 }, 00:05:29.029 "iaa": { 00:05:29.029 "mask": "0x1000", 00:05:29.029 "tpoint_mask": "0x0" 00:05:29.029 }, 00:05:29.029 "iscsi_conn": { 00:05:29.029 
"mask": "0x2", 00:05:29.029 "tpoint_mask": "0x0" 00:05:29.029 }, 00:05:29.029 "nvme_pcie": { 00:05:29.029 "mask": "0x800", 00:05:29.029 "tpoint_mask": "0x0" 00:05:29.029 }, 00:05:29.029 "nvme_tcp": { 00:05:29.029 "mask": "0x2000", 00:05:29.029 "tpoint_mask": "0x0" 00:05:29.029 }, 00:05:29.029 "nvmf_rdma": { 00:05:29.029 "mask": "0x10", 00:05:29.029 "tpoint_mask": "0x0" 00:05:29.029 }, 00:05:29.029 "nvmf_tcp": { 00:05:29.029 "mask": "0x20", 00:05:29.029 "tpoint_mask": "0x0" 00:05:29.029 }, 00:05:29.029 "scsi": { 00:05:29.029 "mask": "0x4", 00:05:29.029 "tpoint_mask": "0x0" 00:05:29.029 }, 00:05:29.029 "sock": { 00:05:29.029 "mask": "0x8000", 00:05:29.029 "tpoint_mask": "0x0" 00:05:29.029 }, 00:05:29.029 "thread": { 00:05:29.029 "mask": "0x400", 00:05:29.029 "tpoint_mask": "0x0" 00:05:29.029 }, 00:05:29.029 "tpoint_group_mask": "0x8", 00:05:29.029 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid72723" 00:05:29.029 }' 00:05:29.029 13:04:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:29.029 13:04:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:29.029 13:04:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:29.029 13:04:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:29.029 13:04:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:29.029 13:04:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:29.029 13:04:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:29.286 13:04:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:29.286 13:04:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:29.286 13:04:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:29.286 00:05:29.286 real 0m0.271s 00:05:29.286 user 0m0.238s 00:05:29.286 sys 0m0.023s 00:05:29.286 13:04:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:29.286 13:04:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:29.286 ************************************ 00:05:29.286 END TEST rpc_trace_cmd_test 00:05:29.286 ************************************ 00:05:29.286 13:04:25 rpc -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:05:29.286 13:04:25 rpc -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:05:29.286 13:04:25 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:29.286 13:04:25 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:29.286 13:04:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.286 ************************************ 00:05:29.286 START TEST go_rpc 00:05:29.286 ************************************ 00:05:29.286 13:04:25 rpc.go_rpc -- common/autotest_common.sh@1121 -- # go_rpc 00:05:29.286 13:04:25 rpc.go_rpc -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:29.286 13:04:25 rpc.go_rpc -- rpc/rpc.sh@51 -- # bdevs='[]' 00:05:29.286 13:04:25 rpc.go_rpc -- rpc/rpc.sh@52 -- # jq length 00:05:29.286 13:04:25 rpc.go_rpc -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:05:29.286 13:04:25 rpc.go_rpc -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:05:29.286 13:04:25 rpc.go_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:29.286 13:04:25 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.287 13:04:25 rpc.go_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:29.287 13:04:25 rpc.go_rpc -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:05:29.287 13:04:25 
rpc.go_rpc -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:29.287 13:04:25 rpc.go_rpc -- rpc/rpc.sh@56 -- # bdevs='[{"aliases":["ecc10e64-bfed-4c6b-9e92-a2dfa6df0334"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"system","dma_device_type":1},{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"flush":true,"nvme_admin":false,"nvme_io":false,"read":true,"reset":true,"unmap":true,"write":true,"write_zeroes":true},"uuid":"ecc10e64-bfed-4c6b-9e92-a2dfa6df0334","zoned":false}]' 00:05:29.287 13:04:25 rpc.go_rpc -- rpc/rpc.sh@57 -- # jq length 00:05:29.544 13:04:26 rpc.go_rpc -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:05:29.544 13:04:26 rpc.go_rpc -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:29.544 13:04:26 rpc.go_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:29.544 13:04:26 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.544 13:04:26 rpc.go_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:29.544 13:04:26 rpc.go_rpc -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:29.544 13:04:26 rpc.go_rpc -- rpc/rpc.sh@60 -- # bdevs='[]' 00:05:29.544 13:04:26 rpc.go_rpc -- rpc/rpc.sh@61 -- # jq length 00:05:29.544 13:04:26 rpc.go_rpc -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:05:29.544 00:05:29.544 real 0m0.225s 00:05:29.544 user 0m0.154s 00:05:29.544 sys 0m0.034s 00:05:29.544 13:04:26 rpc.go_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:29.544 ************************************ 00:05:29.544 END TEST go_rpc 00:05:29.544 13:04:26 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.544 ************************************ 00:05:29.544 13:04:26 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:29.544 13:04:26 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:29.544 13:04:26 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:29.544 13:04:26 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:29.544 13:04:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.544 ************************************ 00:05:29.544 START TEST rpc_daemon_integrity 00:05:29.544 ************************************ 00:05:29.544 13:04:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:05:29.544 13:04:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:29.544 13:04:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:29.544 13:04:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.544 13:04:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:29.544 13:04:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:29.544 13:04:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:29.544 13:04:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:29.544 13:04:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:29.544 13:04:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:29.544 13:04:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.544 
13:04:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:29.544 13:04:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:05:29.544 13:04:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:29.544 13:04:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:29.544 13:04:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.544 13:04:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:29.544 13:04:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:29.544 { 00:05:29.544 "aliases": [ 00:05:29.544 "3b0a59bd-f44b-4905-9b4a-062ff82753ba" 00:05:29.544 ], 00:05:29.544 "assigned_rate_limits": { 00:05:29.544 "r_mbytes_per_sec": 0, 00:05:29.544 "rw_ios_per_sec": 0, 00:05:29.544 "rw_mbytes_per_sec": 0, 00:05:29.544 "w_mbytes_per_sec": 0 00:05:29.544 }, 00:05:29.544 "block_size": 512, 00:05:29.544 "claimed": false, 00:05:29.544 "driver_specific": {}, 00:05:29.544 "memory_domains": [ 00:05:29.544 { 00:05:29.544 "dma_device_id": "system", 00:05:29.544 "dma_device_type": 1 00:05:29.544 }, 00:05:29.544 { 00:05:29.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:29.544 "dma_device_type": 2 00:05:29.544 } 00:05:29.544 ], 00:05:29.544 "name": "Malloc3", 00:05:29.544 "num_blocks": 16384, 00:05:29.544 "product_name": "Malloc disk", 00:05:29.544 "supported_io_types": { 00:05:29.544 "abort": true, 00:05:29.544 "compare": false, 00:05:29.544 "compare_and_write": false, 00:05:29.544 "flush": true, 00:05:29.544 "nvme_admin": false, 00:05:29.544 "nvme_io": false, 00:05:29.544 "read": true, 00:05:29.544 "reset": true, 00:05:29.544 "unmap": true, 00:05:29.544 "write": true, 00:05:29.544 "write_zeroes": true 00:05:29.544 }, 00:05:29.544 "uuid": "3b0a59bd-f44b-4905-9b4a-062ff82753ba", 00:05:29.544 "zoned": false 00:05:29.544 } 00:05:29.544 ]' 00:05:29.544 13:04:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:29.802 13:04:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:29.802 13:04:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:05:29.802 13:04:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:29.803 13:04:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.803 [2024-07-15 13:04:26.300262] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:29.803 [2024-07-15 13:04:26.300309] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:29.803 [2024-07-15 13:04:26.300331] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xcd88a0 00:05:29.803 [2024-07-15 13:04:26.300341] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:29.803 [2024-07-15 13:04:26.301858] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:29.803 [2024-07-15 13:04:26.301907] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:29.803 Passthru0 00:05:29.803 13:04:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:29.803 13:04:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:29.803 13:04:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:29.803 13:04:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.803 13:04:26 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:29.803 13:04:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:29.803 { 00:05:29.803 "aliases": [ 00:05:29.803 "3b0a59bd-f44b-4905-9b4a-062ff82753ba" 00:05:29.803 ], 00:05:29.803 "assigned_rate_limits": { 00:05:29.803 "r_mbytes_per_sec": 0, 00:05:29.803 "rw_ios_per_sec": 0, 00:05:29.803 "rw_mbytes_per_sec": 0, 00:05:29.803 "w_mbytes_per_sec": 0 00:05:29.803 }, 00:05:29.803 "block_size": 512, 00:05:29.803 "claim_type": "exclusive_write", 00:05:29.803 "claimed": true, 00:05:29.803 "driver_specific": {}, 00:05:29.803 "memory_domains": [ 00:05:29.803 { 00:05:29.803 "dma_device_id": "system", 00:05:29.803 "dma_device_type": 1 00:05:29.803 }, 00:05:29.803 { 00:05:29.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:29.803 "dma_device_type": 2 00:05:29.803 } 00:05:29.803 ], 00:05:29.803 "name": "Malloc3", 00:05:29.803 "num_blocks": 16384, 00:05:29.803 "product_name": "Malloc disk", 00:05:29.803 "supported_io_types": { 00:05:29.803 "abort": true, 00:05:29.803 "compare": false, 00:05:29.803 "compare_and_write": false, 00:05:29.803 "flush": true, 00:05:29.803 "nvme_admin": false, 00:05:29.803 "nvme_io": false, 00:05:29.803 "read": true, 00:05:29.803 "reset": true, 00:05:29.803 "unmap": true, 00:05:29.803 "write": true, 00:05:29.803 "write_zeroes": true 00:05:29.803 }, 00:05:29.803 "uuid": "3b0a59bd-f44b-4905-9b4a-062ff82753ba", 00:05:29.803 "zoned": false 00:05:29.803 }, 00:05:29.803 { 00:05:29.803 "aliases": [ 00:05:29.803 "ce13b351-accf-5745-95d2-706e69049403" 00:05:29.803 ], 00:05:29.803 "assigned_rate_limits": { 00:05:29.803 "r_mbytes_per_sec": 0, 00:05:29.803 "rw_ios_per_sec": 0, 00:05:29.803 "rw_mbytes_per_sec": 0, 00:05:29.803 "w_mbytes_per_sec": 0 00:05:29.803 }, 00:05:29.803 "block_size": 512, 00:05:29.803 "claimed": false, 00:05:29.803 "driver_specific": { 00:05:29.803 "passthru": { 00:05:29.803 "base_bdev_name": "Malloc3", 00:05:29.803 "name": "Passthru0" 00:05:29.803 } 00:05:29.803 }, 00:05:29.803 "memory_domains": [ 00:05:29.803 { 00:05:29.803 "dma_device_id": "system", 00:05:29.803 "dma_device_type": 1 00:05:29.803 }, 00:05:29.803 { 00:05:29.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:29.803 "dma_device_type": 2 00:05:29.803 } 00:05:29.803 ], 00:05:29.803 "name": "Passthru0", 00:05:29.803 "num_blocks": 16384, 00:05:29.803 "product_name": "passthru", 00:05:29.803 "supported_io_types": { 00:05:29.803 "abort": true, 00:05:29.803 "compare": false, 00:05:29.803 "compare_and_write": false, 00:05:29.803 "flush": true, 00:05:29.803 "nvme_admin": false, 00:05:29.803 "nvme_io": false, 00:05:29.803 "read": true, 00:05:29.803 "reset": true, 00:05:29.803 "unmap": true, 00:05:29.803 "write": true, 00:05:29.803 "write_zeroes": true 00:05:29.803 }, 00:05:29.803 "uuid": "ce13b351-accf-5745-95d2-706e69049403", 00:05:29.803 "zoned": false 00:05:29.803 } 00:05:29.803 ]' 00:05:29.803 13:04:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:29.803 13:04:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:29.803 13:04:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:29.803 13:04:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:29.803 13:04:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.803 13:04:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:29.803 13:04:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd 
bdev_malloc_delete Malloc3 00:05:29.803 13:04:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:29.803 13:04:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.803 13:04:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:29.803 13:04:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:29.803 13:04:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:29.803 13:04:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.803 13:04:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:29.803 13:04:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:29.803 13:04:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:29.803 13:04:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:29.803 00:05:29.803 real 0m0.313s 00:05:29.803 user 0m0.213s 00:05:29.803 sys 0m0.035s 00:05:29.803 13:04:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:29.803 ************************************ 00:05:29.803 END TEST rpc_daemon_integrity 00:05:29.803 13:04:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.803 ************************************ 00:05:29.803 13:04:26 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:29.803 13:04:26 rpc -- rpc/rpc.sh@84 -- # killprocess 72723 00:05:29.803 13:04:26 rpc -- common/autotest_common.sh@946 -- # '[' -z 72723 ']' 00:05:29.803 13:04:26 rpc -- common/autotest_common.sh@950 -- # kill -0 72723 00:05:29.803 13:04:26 rpc -- common/autotest_common.sh@951 -- # uname 00:05:29.803 13:04:26 rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:29.803 13:04:26 rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 72723 00:05:30.062 13:04:26 rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:30.062 13:04:26 rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:30.062 killing process with pid 72723 00:05:30.062 13:04:26 rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 72723' 00:05:30.062 13:04:26 rpc -- common/autotest_common.sh@965 -- # kill 72723 00:05:30.062 13:04:26 rpc -- common/autotest_common.sh@970 -- # wait 72723 00:05:30.320 00:05:30.320 real 0m3.092s 00:05:30.320 user 0m4.087s 00:05:30.320 sys 0m0.751s 00:05:30.320 13:04:26 rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:30.320 13:04:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.320 ************************************ 00:05:30.320 END TEST rpc 00:05:30.320 ************************************ 00:05:30.320 13:04:26 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:30.320 13:04:26 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:30.320 13:04:26 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:30.320 13:04:26 -- common/autotest_common.sh@10 -- # set +x 00:05:30.320 ************************************ 00:05:30.320 START TEST skip_rpc 00:05:30.320 ************************************ 00:05:30.320 13:04:26 skip_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:30.320 * Looking for test storage... 
00:05:30.320 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:30.320 13:04:27 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:30.320 13:04:27 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:30.320 13:04:27 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:30.320 13:04:27 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:30.320 13:04:27 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:30.320 13:04:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.577 ************************************ 00:05:30.577 START TEST skip_rpc 00:05:30.577 ************************************ 00:05:30.577 13:04:27 skip_rpc.skip_rpc -- common/autotest_common.sh@1121 -- # test_skip_rpc 00:05:30.577 13:04:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=72984 00:05:30.577 13:04:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:30.577 13:04:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:30.577 13:04:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:30.577 [2024-07-15 13:04:27.128843] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:05:30.577 [2024-07-15 13:04:27.128962] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72984 ] 00:05:30.577 [2024-07-15 13:04:27.262725] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.835 [2024-07-15 13:04:27.348526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.112 13:04:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:36.112 13:04:32 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:36.112 13:04:32 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:36.112 13:04:32 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:36.112 13:04:32 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:36.112 13:04:32 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:36.112 13:04:32 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:36.112 13:04:32 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:36.112 13:04:32 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:36.112 13:04:32 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.112 2024/07/15 13:04:32 error on client creation, err: error during client creation for Unix socket, err: could not connect to a Unix socket on address /var/tmp/spdk.sock, err: dial unix /var/tmp/spdk.sock: connect: no such file or directory 00:05:36.112 13:04:32 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:36.112 13:04:32 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:36.112 13:04:32 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:36.112 13:04:32 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:36.112 13:04:32 skip_rpc.skip_rpc -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:36.112 13:04:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:36.112 13:04:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 72984 00:05:36.112 13:04:32 skip_rpc.skip_rpc -- common/autotest_common.sh@946 -- # '[' -z 72984 ']' 00:05:36.112 13:04:32 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # kill -0 72984 00:05:36.112 13:04:32 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # uname 00:05:36.112 13:04:32 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:36.112 13:04:32 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 72984 00:05:36.112 13:04:32 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:36.112 killing process with pid 72984 00:05:36.112 13:04:32 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:36.112 13:04:32 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 72984' 00:05:36.112 13:04:32 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # kill 72984 00:05:36.112 13:04:32 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # wait 72984 00:05:36.112 00:05:36.112 real 0m5.431s 00:05:36.112 user 0m5.024s 00:05:36.112 sys 0m0.296s 00:05:36.112 13:04:32 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:36.112 13:04:32 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.112 ************************************ 00:05:36.112 END TEST skip_rpc 00:05:36.112 ************************************ 00:05:36.112 13:04:32 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:36.112 13:04:32 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:36.112 13:04:32 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:36.112 13:04:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.112 ************************************ 00:05:36.112 START TEST skip_rpc_with_json 00:05:36.112 ************************************ 00:05:36.112 13:04:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_json 00:05:36.112 13:04:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:36.112 13:04:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=73071 00:05:36.112 13:04:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:36.112 13:04:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:36.112 13:04:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 73071 00:05:36.112 13:04:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@827 -- # '[' -z 73071 ']' 00:05:36.112 13:04:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.112 13:04:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:36.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.112 13:04:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:36.112 13:04:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:36.112 13:04:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:36.112 [2024-07-15 13:04:32.611529] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:05:36.112 [2024-07-15 13:04:32.611678] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73071 ] 00:05:36.112 [2024-07-15 13:04:32.754328] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.112 [2024-07-15 13:04:32.844427] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.044 13:04:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:37.044 13:04:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # return 0 00:05:37.044 13:04:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:37.044 13:04:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.044 13:04:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:37.044 [2024-07-15 13:04:33.600466] nvmf_rpc.c:2558:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:37.044 2024/07/15 13:04:33 error on JSON-RPC call, method: nvmf_get_transports, params: map[trtype:tcp], err: error received for nvmf_get_transports method, err: Code=-19 Msg=No such device 00:05:37.044 request: 00:05:37.044 { 00:05:37.044 "method": "nvmf_get_transports", 00:05:37.044 "params": { 00:05:37.044 "trtype": "tcp" 00:05:37.044 } 00:05:37.044 } 00:05:37.044 Got JSON-RPC error response 00:05:37.044 GoRPCClient: error on JSON-RPC call 00:05:37.044 13:04:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:37.044 13:04:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:37.044 13:04:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.044 13:04:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:37.044 [2024-07-15 13:04:33.612560] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:37.044 13:04:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:37.044 13:04:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:37.044 13:04:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.044 13:04:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:37.044 13:04:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:37.044 13:04:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:37.302 { 00:05:37.302 "subsystems": [ 00:05:37.302 { 00:05:37.302 "subsystem": "keyring", 00:05:37.302 "config": [] 00:05:37.302 }, 00:05:37.302 { 00:05:37.302 "subsystem": "iobuf", 00:05:37.302 "config": [ 00:05:37.302 { 00:05:37.302 "method": "iobuf_set_options", 00:05:37.302 "params": { 00:05:37.302 "large_bufsize": 135168, 00:05:37.302 "large_pool_count": 1024, 00:05:37.302 "small_bufsize": 8192, 00:05:37.302 "small_pool_count": 8192 00:05:37.302 } 00:05:37.302 } 00:05:37.302 ] 
00:05:37.302 }, 00:05:37.302 { 00:05:37.302 "subsystem": "sock", 00:05:37.302 "config": [ 00:05:37.302 { 00:05:37.302 "method": "sock_set_default_impl", 00:05:37.302 "params": { 00:05:37.302 "impl_name": "posix" 00:05:37.302 } 00:05:37.302 }, 00:05:37.302 { 00:05:37.302 "method": "sock_impl_set_options", 00:05:37.302 "params": { 00:05:37.302 "enable_ktls": false, 00:05:37.302 "enable_placement_id": 0, 00:05:37.302 "enable_quickack": false, 00:05:37.302 "enable_recv_pipe": true, 00:05:37.302 "enable_zerocopy_send_client": false, 00:05:37.302 "enable_zerocopy_send_server": true, 00:05:37.302 "impl_name": "ssl", 00:05:37.302 "recv_buf_size": 4096, 00:05:37.302 "send_buf_size": 4096, 00:05:37.302 "tls_version": 0, 00:05:37.302 "zerocopy_threshold": 0 00:05:37.302 } 00:05:37.302 }, 00:05:37.302 { 00:05:37.302 "method": "sock_impl_set_options", 00:05:37.302 "params": { 00:05:37.302 "enable_ktls": false, 00:05:37.302 "enable_placement_id": 0, 00:05:37.302 "enable_quickack": false, 00:05:37.302 "enable_recv_pipe": true, 00:05:37.302 "enable_zerocopy_send_client": false, 00:05:37.302 "enable_zerocopy_send_server": true, 00:05:37.302 "impl_name": "posix", 00:05:37.302 "recv_buf_size": 2097152, 00:05:37.302 "send_buf_size": 2097152, 00:05:37.302 "tls_version": 0, 00:05:37.302 "zerocopy_threshold": 0 00:05:37.302 } 00:05:37.302 } 00:05:37.302 ] 00:05:37.302 }, 00:05:37.302 { 00:05:37.302 "subsystem": "vmd", 00:05:37.302 "config": [] 00:05:37.302 }, 00:05:37.302 { 00:05:37.302 "subsystem": "accel", 00:05:37.302 "config": [ 00:05:37.302 { 00:05:37.302 "method": "accel_set_options", 00:05:37.302 "params": { 00:05:37.302 "buf_count": 2048, 00:05:37.302 "large_cache_size": 16, 00:05:37.302 "sequence_count": 2048, 00:05:37.302 "small_cache_size": 128, 00:05:37.302 "task_count": 2048 00:05:37.302 } 00:05:37.302 } 00:05:37.302 ] 00:05:37.302 }, 00:05:37.302 { 00:05:37.302 "subsystem": "bdev", 00:05:37.302 "config": [ 00:05:37.302 { 00:05:37.302 "method": "bdev_set_options", 00:05:37.302 "params": { 00:05:37.302 "bdev_auto_examine": true, 00:05:37.302 "bdev_io_cache_size": 256, 00:05:37.302 "bdev_io_pool_size": 65535, 00:05:37.302 "iobuf_large_cache_size": 16, 00:05:37.302 "iobuf_small_cache_size": 128 00:05:37.302 } 00:05:37.302 }, 00:05:37.302 { 00:05:37.302 "method": "bdev_raid_set_options", 00:05:37.302 "params": { 00:05:37.302 "process_window_size_kb": 1024 00:05:37.302 } 00:05:37.302 }, 00:05:37.302 { 00:05:37.302 "method": "bdev_iscsi_set_options", 00:05:37.302 "params": { 00:05:37.302 "timeout_sec": 30 00:05:37.302 } 00:05:37.302 }, 00:05:37.302 { 00:05:37.302 "method": "bdev_nvme_set_options", 00:05:37.302 "params": { 00:05:37.302 "action_on_timeout": "none", 00:05:37.302 "allow_accel_sequence": false, 00:05:37.302 "arbitration_burst": 0, 00:05:37.302 "bdev_retry_count": 3, 00:05:37.302 "ctrlr_loss_timeout_sec": 0, 00:05:37.302 "delay_cmd_submit": true, 00:05:37.302 "dhchap_dhgroups": [ 00:05:37.302 "null", 00:05:37.302 "ffdhe2048", 00:05:37.302 "ffdhe3072", 00:05:37.302 "ffdhe4096", 00:05:37.302 "ffdhe6144", 00:05:37.302 "ffdhe8192" 00:05:37.302 ], 00:05:37.302 "dhchap_digests": [ 00:05:37.302 "sha256", 00:05:37.302 "sha384", 00:05:37.302 "sha512" 00:05:37.302 ], 00:05:37.302 "disable_auto_failback": false, 00:05:37.302 "fast_io_fail_timeout_sec": 0, 00:05:37.302 "generate_uuids": false, 00:05:37.302 "high_priority_weight": 0, 00:05:37.302 "io_path_stat": false, 00:05:37.302 "io_queue_requests": 0, 00:05:37.302 "keep_alive_timeout_ms": 10000, 00:05:37.302 "low_priority_weight": 0, 00:05:37.302 
"medium_priority_weight": 0, 00:05:37.302 "nvme_adminq_poll_period_us": 10000, 00:05:37.302 "nvme_error_stat": false, 00:05:37.302 "nvme_ioq_poll_period_us": 0, 00:05:37.302 "rdma_cm_event_timeout_ms": 0, 00:05:37.302 "rdma_max_cq_size": 0, 00:05:37.302 "rdma_srq_size": 0, 00:05:37.302 "reconnect_delay_sec": 0, 00:05:37.302 "timeout_admin_us": 0, 00:05:37.302 "timeout_us": 0, 00:05:37.302 "transport_ack_timeout": 0, 00:05:37.302 "transport_retry_count": 4, 00:05:37.302 "transport_tos": 0 00:05:37.302 } 00:05:37.302 }, 00:05:37.302 { 00:05:37.302 "method": "bdev_nvme_set_hotplug", 00:05:37.302 "params": { 00:05:37.302 "enable": false, 00:05:37.302 "period_us": 100000 00:05:37.302 } 00:05:37.302 }, 00:05:37.302 { 00:05:37.302 "method": "bdev_wait_for_examine" 00:05:37.302 } 00:05:37.302 ] 00:05:37.302 }, 00:05:37.302 { 00:05:37.302 "subsystem": "scsi", 00:05:37.302 "config": null 00:05:37.302 }, 00:05:37.302 { 00:05:37.302 "subsystem": "scheduler", 00:05:37.302 "config": [ 00:05:37.302 { 00:05:37.302 "method": "framework_set_scheduler", 00:05:37.302 "params": { 00:05:37.302 "name": "static" 00:05:37.302 } 00:05:37.302 } 00:05:37.302 ] 00:05:37.302 }, 00:05:37.302 { 00:05:37.302 "subsystem": "vhost_scsi", 00:05:37.302 "config": [] 00:05:37.302 }, 00:05:37.302 { 00:05:37.302 "subsystem": "vhost_blk", 00:05:37.302 "config": [] 00:05:37.302 }, 00:05:37.302 { 00:05:37.302 "subsystem": "ublk", 00:05:37.302 "config": [] 00:05:37.302 }, 00:05:37.302 { 00:05:37.302 "subsystem": "nbd", 00:05:37.302 "config": [] 00:05:37.302 }, 00:05:37.302 { 00:05:37.302 "subsystem": "nvmf", 00:05:37.302 "config": [ 00:05:37.302 { 00:05:37.302 "method": "nvmf_set_config", 00:05:37.302 "params": { 00:05:37.302 "admin_cmd_passthru": { 00:05:37.302 "identify_ctrlr": false 00:05:37.303 }, 00:05:37.303 "discovery_filter": "match_any" 00:05:37.303 } 00:05:37.303 }, 00:05:37.303 { 00:05:37.303 "method": "nvmf_set_max_subsystems", 00:05:37.303 "params": { 00:05:37.303 "max_subsystems": 1024 00:05:37.303 } 00:05:37.303 }, 00:05:37.303 { 00:05:37.303 "method": "nvmf_set_crdt", 00:05:37.303 "params": { 00:05:37.303 "crdt1": 0, 00:05:37.303 "crdt2": 0, 00:05:37.303 "crdt3": 0 00:05:37.303 } 00:05:37.303 }, 00:05:37.303 { 00:05:37.303 "method": "nvmf_create_transport", 00:05:37.303 "params": { 00:05:37.303 "abort_timeout_sec": 1, 00:05:37.303 "ack_timeout": 0, 00:05:37.303 "buf_cache_size": 4294967295, 00:05:37.303 "c2h_success": true, 00:05:37.303 "data_wr_pool_size": 0, 00:05:37.303 "dif_insert_or_strip": false, 00:05:37.303 "in_capsule_data_size": 4096, 00:05:37.303 "io_unit_size": 131072, 00:05:37.303 "max_aq_depth": 128, 00:05:37.303 "max_io_qpairs_per_ctrlr": 127, 00:05:37.303 "max_io_size": 131072, 00:05:37.303 "max_queue_depth": 128, 00:05:37.303 "num_shared_buffers": 511, 00:05:37.303 "sock_priority": 0, 00:05:37.303 "trtype": "TCP", 00:05:37.303 "zcopy": false 00:05:37.303 } 00:05:37.303 } 00:05:37.303 ] 00:05:37.303 }, 00:05:37.303 { 00:05:37.303 "subsystem": "iscsi", 00:05:37.303 "config": [ 00:05:37.303 { 00:05:37.303 "method": "iscsi_set_options", 00:05:37.303 "params": { 00:05:37.303 "allow_duplicated_isid": false, 00:05:37.303 "chap_group": 0, 00:05:37.303 "data_out_pool_size": 2048, 00:05:37.303 "default_time2retain": 20, 00:05:37.303 "default_time2wait": 2, 00:05:37.303 "disable_chap": false, 00:05:37.303 "error_recovery_level": 0, 00:05:37.303 "first_burst_length": 8192, 00:05:37.303 "immediate_data": true, 00:05:37.303 "immediate_data_pool_size": 16384, 00:05:37.303 "max_connections_per_session": 2, 
00:05:37.303 "max_large_datain_per_connection": 64, 00:05:37.303 "max_queue_depth": 64, 00:05:37.303 "max_r2t_per_connection": 4, 00:05:37.303 "max_sessions": 128, 00:05:37.303 "mutual_chap": false, 00:05:37.303 "node_base": "iqn.2016-06.io.spdk", 00:05:37.303 "nop_in_interval": 30, 00:05:37.303 "nop_timeout": 60, 00:05:37.303 "pdu_pool_size": 36864, 00:05:37.303 "require_chap": false 00:05:37.303 } 00:05:37.303 } 00:05:37.303 ] 00:05:37.303 } 00:05:37.303 ] 00:05:37.303 } 00:05:37.303 13:04:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:37.303 13:04:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 73071 00:05:37.303 13:04:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 73071 ']' 00:05:37.303 13:04:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 73071 00:05:37.303 13:04:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:05:37.303 13:04:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:37.303 13:04:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 73071 00:05:37.303 13:04:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:37.303 13:04:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:37.303 killing process with pid 73071 00:05:37.303 13:04:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 73071' 00:05:37.303 13:04:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 73071 00:05:37.303 13:04:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 73071 00:05:37.560 13:04:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=73116 00:05:37.560 13:04:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:37.560 13:04:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:42.824 13:04:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 73116 00:05:42.824 13:04:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 73116 ']' 00:05:42.824 13:04:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 73116 00:05:42.824 13:04:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:05:42.824 13:04:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:42.824 13:04:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 73116 00:05:42.824 13:04:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:42.824 13:04:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:42.824 killing process with pid 73116 00:05:42.824 13:04:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 73116' 00:05:42.824 13:04:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 73116 00:05:42.824 13:04:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 73116 00:05:43.083 13:04:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' 
/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:43.083 13:04:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:43.083 00:05:43.083 real 0m7.061s 00:05:43.083 user 0m6.743s 00:05:43.083 sys 0m0.687s 00:05:43.083 13:04:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:43.083 ************************************ 00:05:43.083 END TEST skip_rpc_with_json 00:05:43.083 ************************************ 00:05:43.083 13:04:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:43.083 13:04:39 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:43.083 13:04:39 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:43.083 13:04:39 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:43.083 13:04:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.083 ************************************ 00:05:43.083 START TEST skip_rpc_with_delay 00:05:43.083 ************************************ 00:05:43.083 13:04:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_delay 00:05:43.083 13:04:39 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:43.083 13:04:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:43.083 13:04:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:43.083 13:04:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:43.083 13:04:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:43.083 13:04:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:43.083 13:04:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:43.083 13:04:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:43.083 13:04:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:43.083 13:04:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:43.083 13:04:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:43.083 13:04:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:43.083 [2024-07-15 13:04:39.727981] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:43.083 [2024-07-15 13:04:39.728139] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:43.083 13:04:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:43.083 13:04:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:43.083 13:04:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:43.083 13:04:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:43.083 00:05:43.083 real 0m0.087s 00:05:43.083 user 0m0.049s 00:05:43.083 sys 0m0.037s 00:05:43.083 13:04:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:43.083 13:04:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:43.083 ************************************ 00:05:43.083 END TEST skip_rpc_with_delay 00:05:43.083 ************************************ 00:05:43.083 13:04:39 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:43.083 13:04:39 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:43.083 13:04:39 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:43.083 13:04:39 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:43.083 13:04:39 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:43.083 13:04:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.083 ************************************ 00:05:43.083 START TEST exit_on_failed_rpc_init 00:05:43.083 ************************************ 00:05:43.083 13:04:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1121 -- # test_exit_on_failed_rpc_init 00:05:43.083 13:04:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=73220 00:05:43.083 13:04:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 73220 00:05:43.083 13:04:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@827 -- # '[' -z 73220 ']' 00:05:43.083 13:04:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:43.083 13:04:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.083 13:04:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:43.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.083 13:04:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.083 13:04:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:43.083 13:04:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:43.341 [2024-07-15 13:04:39.866612] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:05:43.341 [2024-07-15 13:04:39.867194] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73220 ] 00:05:43.341 [2024-07-15 13:04:40.007775] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.600 [2024-07-15 13:04:40.102135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.167 13:04:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:44.167 13:04:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # return 0 00:05:44.167 13:04:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:44.167 13:04:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:44.167 13:04:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:44.167 13:04:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:44.167 13:04:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:44.167 13:04:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:44.167 13:04:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:44.167 13:04:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:44.167 13:04:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:44.167 13:04:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:44.167 13:04:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:44.167 13:04:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:44.167 13:04:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:44.425 [2024-07-15 13:04:40.937371] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:05:44.425 [2024-07-15 13:04:40.937476] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73250 ] 00:05:44.425 [2024-07-15 13:04:41.078315] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.683 [2024-07-15 13:04:41.171964] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:44.683 [2024-07-15 13:04:41.172087] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:44.683 [2024-07-15 13:04:41.172105] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:44.683 [2024-07-15 13:04:41.172116] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:44.683 13:04:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:44.683 13:04:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:44.683 13:04:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:44.683 13:04:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:44.683 13:04:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:44.683 13:04:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:44.683 13:04:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:44.683 13:04:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 73220 00:05:44.683 13:04:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@946 -- # '[' -z 73220 ']' 00:05:44.683 13:04:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # kill -0 73220 00:05:44.683 13:04:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # uname 00:05:44.683 13:04:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:44.683 13:04:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 73220 00:05:44.683 13:04:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:44.683 13:04:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:44.684 killing process with pid 73220 00:05:44.684 13:04:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 73220' 00:05:44.684 13:04:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # kill 73220 00:05:44.684 13:04:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # wait 73220 00:05:44.942 00:05:44.942 real 0m1.874s 00:05:44.942 user 0m2.189s 00:05:44.942 sys 0m0.442s 00:05:44.942 13:04:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:44.942 13:04:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:44.942 ************************************ 00:05:44.942 END TEST exit_on_failed_rpc_init 00:05:44.942 ************************************ 00:05:45.200 13:04:41 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:45.200 00:05:45.200 real 0m14.751s 00:05:45.200 user 0m14.112s 00:05:45.200 sys 0m1.644s 00:05:45.200 13:04:41 skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:45.200 13:04:41 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.200 ************************************ 00:05:45.200 END TEST skip_rpc 00:05:45.200 ************************************ 00:05:45.200 13:04:41 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:45.200 13:04:41 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:45.200 13:04:41 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:45.200 13:04:41 -- common/autotest_common.sh@10 -- # set +x 00:05:45.200 
************************************ 00:05:45.200 START TEST rpc_client 00:05:45.200 ************************************ 00:05:45.200 13:04:41 rpc_client -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:45.200 * Looking for test storage... 00:05:45.200 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:45.200 13:04:41 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:45.200 OK 00:05:45.200 13:04:41 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:45.200 00:05:45.200 real 0m0.105s 00:05:45.200 user 0m0.047s 00:05:45.200 sys 0m0.065s 00:05:45.200 13:04:41 rpc_client -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:45.200 13:04:41 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:45.200 ************************************ 00:05:45.200 END TEST rpc_client 00:05:45.200 ************************************ 00:05:45.200 13:04:41 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:45.200 13:04:41 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:45.200 13:04:41 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:45.200 13:04:41 -- common/autotest_common.sh@10 -- # set +x 00:05:45.200 ************************************ 00:05:45.200 START TEST json_config 00:05:45.200 ************************************ 00:05:45.200 13:04:41 json_config -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:45.460 13:04:41 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:45.460 13:04:41 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:45.460 13:04:41 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:45.460 13:04:41 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:45.460 13:04:41 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:45.460 13:04:41 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:45.460 13:04:41 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:45.460 13:04:41 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:45.460 13:04:41 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:45.460 13:04:41 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:45.460 13:04:41 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:45.460 13:04:41 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:45.460 13:04:41 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:05:45.460 13:04:41 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=c8b8b44b-387e-43b9-a950-dc0d98528a02 00:05:45.460 13:04:41 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:45.460 13:04:41 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:45.460 13:04:41 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:45.460 13:04:41 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:45.460 13:04:41 json_config -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:45.460 13:04:41 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:45.460 13:04:42 json_config -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:45.460 13:04:42 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:45.460 13:04:42 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.460 13:04:42 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.460 13:04:42 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.460 13:04:42 json_config -- paths/export.sh@5 -- # export PATH 00:05:45.460 13:04:42 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.461 13:04:42 json_config -- nvmf/common.sh@47 -- # : 0 00:05:45.461 13:04:42 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:45.461 13:04:42 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:45.461 13:04:42 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:45.461 13:04:42 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:45.461 13:04:42 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:45.461 13:04:42 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:45.461 13:04:42 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:45.461 13:04:42 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:45.461 13:04:42 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:45.461 13:04:42 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:45.461 13:04:42 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:45.461 13:04:42 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:45.461 13:04:42 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:45.461 13:04:42 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:45.461 13:04:42 json_config -- 
json_config/json_config.sh@31 -- # declare -A app_pid 00:05:45.461 13:04:42 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:45.461 13:04:42 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:45.461 13:04:42 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:45.461 13:04:42 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:45.461 13:04:42 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:05:45.461 13:04:42 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:45.461 13:04:42 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:45.461 13:04:42 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:45.461 INFO: JSON configuration test init 00:05:45.461 13:04:42 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:45.461 13:04:42 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:45.461 13:04:42 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:45.461 13:04:42 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:45.461 13:04:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:45.461 13:04:42 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:45.461 13:04:42 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:45.461 13:04:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:45.461 13:04:42 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:45.461 13:04:42 json_config -- json_config/common.sh@9 -- # local app=target 00:05:45.461 13:04:42 json_config -- json_config/common.sh@10 -- # shift 00:05:45.461 13:04:42 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:45.461 13:04:42 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:45.461 13:04:42 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:45.461 13:04:42 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:45.461 13:04:42 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:45.461 13:04:42 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=73378 00:05:45.461 13:04:42 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:45.461 Waiting for target to run... 00:05:45.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:05:45.461 13:04:42 json_config -- json_config/common.sh@25 -- # waitforlisten 73378 /var/tmp/spdk_tgt.sock 00:05:45.461 13:04:42 json_config -- common/autotest_common.sh@827 -- # '[' -z 73378 ']' 00:05:45.461 13:04:42 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:45.461 13:04:42 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:45.461 13:04:42 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:45.461 13:04:42 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:45.461 13:04:42 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:45.461 13:04:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:45.461 [2024-07-15 13:04:42.088811] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:05:45.461 [2024-07-15 13:04:42.088906] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73378 ] 00:05:46.028 [2024-07-15 13:04:42.504899] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.028 [2024-07-15 13:04:42.570788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.595 13:04:43 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:46.595 00:05:46.595 13:04:43 json_config -- common/autotest_common.sh@860 -- # return 0 00:05:46.595 13:04:43 json_config -- json_config/common.sh@26 -- # echo '' 00:05:46.595 13:04:43 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:46.595 13:04:43 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:46.595 13:04:43 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:46.595 13:04:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:46.595 13:04:43 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:46.595 13:04:43 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:46.595 13:04:43 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:46.595 13:04:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:46.595 13:04:43 json_config -- json_config/json_config.sh@273 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:46.595 13:04:43 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:46.595 13:04:43 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:47.160 13:04:43 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:47.160 13:04:43 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:47.160 13:04:43 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:47.160 13:04:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:47.160 13:04:43 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:47.160 13:04:43 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:47.160 13:04:43 json_config -- json_config/json_config.sh@46 -- # 
local enabled_types 00:05:47.160 13:04:43 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:47.160 13:04:43 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:47.160 13:04:43 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:47.418 13:04:43 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:47.418 13:04:43 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:47.418 13:04:43 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:47.418 13:04:43 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:47.418 13:04:43 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:47.418 13:04:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:47.418 13:04:43 json_config -- json_config/json_config.sh@55 -- # return 0 00:05:47.418 13:04:43 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:47.418 13:04:43 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:47.418 13:04:43 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:47.418 13:04:43 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:47.418 13:04:43 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:47.418 13:04:43 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:47.418 13:04:43 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:47.418 13:04:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:47.418 13:04:43 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:47.418 13:04:43 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:05:47.418 13:04:43 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:05:47.418 13:04:43 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:47.418 13:04:43 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:47.680 MallocForNvmf0 00:05:47.680 13:04:44 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:47.680 13:04:44 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:47.937 MallocForNvmf1 00:05:47.937 13:04:44 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:47.937 13:04:44 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:47.937 [2024-07-15 13:04:44.671350] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:48.195 13:04:44 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:48.195 13:04:44 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:05:48.453 13:04:44 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:48.453 13:04:44 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:48.712 13:04:45 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:48.712 13:04:45 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:48.970 13:04:45 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:48.970 13:04:45 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:48.970 [2024-07-15 13:04:45.699900] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:49.227 13:04:45 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:49.227 13:04:45 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:49.227 13:04:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:49.227 13:04:45 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:49.227 13:04:45 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:49.227 13:04:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:49.227 13:04:45 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:49.227 13:04:45 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:49.227 13:04:45 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:49.486 MallocBdevForConfigChangeCheck 00:05:49.486 13:04:46 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:49.486 13:04:46 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:49.486 13:04:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:49.486 13:04:46 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:49.486 13:04:46 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:49.744 INFO: shutting down applications... 00:05:49.744 13:04:46 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 
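Note: up to this point the json_config test has assembled the whole NVMe-oF/TCP target state purely through rpc.py calls against /var/tmp/spdk_tgt.sock, as traced above. A minimal by-hand sketch of that same sequence, assuming a running spdk_tgt with its RPC socket at that path (every subcommand and argument is taken from the trace; only the explicit redirect of save_config into spdk_tgt_config.json is written out here for illustration):

    # assumes spdk_tgt is already running with -r /var/tmp/spdk_tgt.sock
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    $rpc bdev_malloc_create 8 512 --name MallocForNvmf0
    $rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
    $rpc nvmf_create_transport -t tcp -u 8192 -c 0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
    $rpc save_config > /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json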
00:05:49.744 13:04:46 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:49.744 13:04:46 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:49.744 13:04:46 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:49.744 13:04:46 json_config -- json_config/json_config.sh@333 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:50.307 Calling clear_iscsi_subsystem 00:05:50.307 Calling clear_nvmf_subsystem 00:05:50.307 Calling clear_nbd_subsystem 00:05:50.307 Calling clear_ublk_subsystem 00:05:50.307 Calling clear_vhost_blk_subsystem 00:05:50.307 Calling clear_vhost_scsi_subsystem 00:05:50.307 Calling clear_bdev_subsystem 00:05:50.307 13:04:46 json_config -- json_config/json_config.sh@337 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:05:50.307 13:04:46 json_config -- json_config/json_config.sh@343 -- # count=100 00:05:50.307 13:04:46 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:50.307 13:04:46 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:50.307 13:04:46 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:50.307 13:04:46 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:05:50.567 13:04:47 json_config -- json_config/json_config.sh@345 -- # break 00:05:50.567 13:04:47 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:50.567 13:04:47 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:50.567 13:04:47 json_config -- json_config/common.sh@31 -- # local app=target 00:05:50.567 13:04:47 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:50.567 13:04:47 json_config -- json_config/common.sh@35 -- # [[ -n 73378 ]] 00:05:50.567 13:04:47 json_config -- json_config/common.sh@38 -- # kill -SIGINT 73378 00:05:50.567 13:04:47 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:50.567 13:04:47 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:50.567 13:04:47 json_config -- json_config/common.sh@41 -- # kill -0 73378 00:05:50.567 13:04:47 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:51.133 13:04:47 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:51.133 13:04:47 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:51.133 13:04:47 json_config -- json_config/common.sh@41 -- # kill -0 73378 00:05:51.133 13:04:47 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:51.134 13:04:47 json_config -- json_config/common.sh@43 -- # break 00:05:51.134 13:04:47 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:51.134 SPDK target shutdown done 00:05:51.134 13:04:47 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:51.134 INFO: relaunching applications... 00:05:51.134 13:04:47 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 
00:05:51.134 13:04:47 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:51.134 13:04:47 json_config -- json_config/common.sh@9 -- # local app=target 00:05:51.134 13:04:47 json_config -- json_config/common.sh@10 -- # shift 00:05:51.134 13:04:47 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:51.134 13:04:47 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:51.134 13:04:47 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:51.134 13:04:47 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:51.134 13:04:47 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:51.134 13:04:47 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=73648 00:05:51.134 13:04:47 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:51.134 Waiting for target to run... 00:05:51.134 13:04:47 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:51.134 13:04:47 json_config -- json_config/common.sh@25 -- # waitforlisten 73648 /var/tmp/spdk_tgt.sock 00:05:51.134 13:04:47 json_config -- common/autotest_common.sh@827 -- # '[' -z 73648 ']' 00:05:51.134 13:04:47 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:51.134 13:04:47 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:51.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:51.134 13:04:47 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:51.134 13:04:47 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:51.134 13:04:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:51.134 [2024-07-15 13:04:47.755571] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:05:51.134 [2024-07-15 13:04:47.755702] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73648 ] 00:05:51.698 [2024-07-15 13:04:48.171373] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.698 [2024-07-15 13:04:48.236380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.955 [2024-07-15 13:04:48.542300] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:51.955 [2024-07-15 13:04:48.574342] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:52.213 13:04:48 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:52.213 00:05:52.213 13:04:48 json_config -- common/autotest_common.sh@860 -- # return 0 00:05:52.213 13:04:48 json_config -- json_config/common.sh@26 -- # echo '' 00:05:52.213 13:04:48 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:52.213 INFO: Checking if target configuration is the same... 00:05:52.213 13:04:48 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 
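Note: the "same configuration" check performed next in the trace is a plain textual diff. Both sides, the relaunched target's save_config output and the spdk_tgt_config.json it was started from, are normalized with config_filter.py -method sort before comparison, so an unchanged runtime state diffs clean (ret=0). A rough equivalent of that comparison step, assuming config_filter.py reads the configuration on stdin (the trace shows it invoked with no file arguments) and using /tmp paths purely for illustration:

    filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
    # normalize the live target configuration and the on-disk JSON the same way
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config | $filter -method sort > /tmp/live_sorted.json
    $filter -method sort < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > /tmp/file_sorted.json
    diff -u /tmp/live_sorted.json /tmp/file_sorted.json && echo 'INFO: JSON config files are the same'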
00:05:52.213 13:04:48 json_config -- json_config/json_config.sh@378 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:52.213 13:04:48 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:52.213 13:04:48 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:52.213 + '[' 2 -ne 2 ']' 00:05:52.213 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:52.213 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:52.213 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:52.213 +++ basename /dev/fd/62 00:05:52.213 ++ mktemp /tmp/62.XXX 00:05:52.213 + tmp_file_1=/tmp/62.qRz 00:05:52.213 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:52.213 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:52.213 + tmp_file_2=/tmp/spdk_tgt_config.json.668 00:05:52.213 + ret=0 00:05:52.213 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:52.471 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:52.728 + diff -u /tmp/62.qRz /tmp/spdk_tgt_config.json.668 00:05:52.728 INFO: JSON config files are the same 00:05:52.728 + echo 'INFO: JSON config files are the same' 00:05:52.728 + rm /tmp/62.qRz /tmp/spdk_tgt_config.json.668 00:05:52.728 + exit 0 00:05:52.728 13:04:49 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:52.728 INFO: changing configuration and checking if this can be detected... 00:05:52.728 13:04:49 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:52.728 13:04:49 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:52.728 13:04:49 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:52.985 13:04:49 json_config -- json_config/json_config.sh@387 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:52.985 13:04:49 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:52.985 13:04:49 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:52.985 + '[' 2 -ne 2 ']' 00:05:52.985 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:52.985 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:05:52.985 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:52.985 +++ basename /dev/fd/62 00:05:52.985 ++ mktemp /tmp/62.XXX 00:05:52.985 + tmp_file_1=/tmp/62.aDd 00:05:52.985 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:52.985 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:52.985 + tmp_file_2=/tmp/spdk_tgt_config.json.Vez 00:05:52.985 + ret=0 00:05:52.985 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:53.242 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:53.242 + diff -u /tmp/62.aDd /tmp/spdk_tgt_config.json.Vez 00:05:53.242 + ret=1 00:05:53.242 + echo '=== Start of file: /tmp/62.aDd ===' 00:05:53.242 + cat /tmp/62.aDd 00:05:53.242 + echo '=== End of file: /tmp/62.aDd ===' 00:05:53.242 + echo '' 00:05:53.242 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Vez ===' 00:05:53.242 + cat /tmp/spdk_tgt_config.json.Vez 00:05:53.242 + echo '=== End of file: /tmp/spdk_tgt_config.json.Vez ===' 00:05:53.242 + echo '' 00:05:53.242 + rm /tmp/62.aDd /tmp/spdk_tgt_config.json.Vez 00:05:53.242 + exit 1 00:05:53.242 INFO: configuration change detected. 00:05:53.242 13:04:49 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:53.242 13:04:49 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:53.242 13:04:49 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:53.242 13:04:49 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:53.242 13:04:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:53.242 13:04:49 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:05:53.242 13:04:49 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:53.242 13:04:49 json_config -- json_config/json_config.sh@317 -- # [[ -n 73648 ]] 00:05:53.242 13:04:49 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:53.242 13:04:49 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:53.242 13:04:49 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:53.242 13:04:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:53.242 13:04:49 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:53.242 13:04:49 json_config -- json_config/json_config.sh@193 -- # uname -s 00:05:53.505 13:04:49 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:53.505 13:04:49 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:53.505 13:04:49 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:53.505 13:04:49 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:53.505 13:04:49 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:53.505 13:04:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:53.505 13:04:50 json_config -- json_config/json_config.sh@323 -- # killprocess 73648 00:05:53.505 13:04:50 json_config -- common/autotest_common.sh@946 -- # '[' -z 73648 ']' 00:05:53.505 13:04:50 json_config -- common/autotest_common.sh@950 -- # kill -0 73648 00:05:53.505 13:04:50 json_config -- common/autotest_common.sh@951 -- # uname 00:05:53.505 13:04:50 json_config -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:53.505 13:04:50 json_config -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 73648 00:05:53.505 
13:04:50 json_config -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:53.505 13:04:50 json_config -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:53.505 13:04:50 json_config -- common/autotest_common.sh@964 -- # echo 'killing process with pid 73648' 00:05:53.505 killing process with pid 73648 00:05:53.505 13:04:50 json_config -- common/autotest_common.sh@965 -- # kill 73648 00:05:53.505 13:04:50 json_config -- common/autotest_common.sh@970 -- # wait 73648 00:05:53.779 13:04:50 json_config -- json_config/json_config.sh@326 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:53.779 13:04:50 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:53.779 13:04:50 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:53.779 13:04:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:53.779 13:04:50 json_config -- json_config/json_config.sh@328 -- # return 0 00:05:53.779 INFO: Success 00:05:53.779 13:04:50 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:53.779 00:05:53.779 real 0m8.390s 00:05:53.779 user 0m12.016s 00:05:53.779 sys 0m1.855s 00:05:53.779 13:04:50 json_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:53.779 13:04:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:53.779 ************************************ 00:05:53.779 END TEST json_config 00:05:53.779 ************************************ 00:05:53.779 13:04:50 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:53.779 13:04:50 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:53.779 13:04:50 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:53.779 13:04:50 -- common/autotest_common.sh@10 -- # set +x 00:05:53.779 ************************************ 00:05:53.779 START TEST json_config_extra_key 00:05:53.779 ************************************ 00:05:53.779 13:04:50 json_config_extra_key -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:53.779 13:04:50 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:53.779 13:04:50 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:53.779 13:04:50 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:53.779 13:04:50 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:53.779 13:04:50 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:53.779 13:04:50 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:53.779 13:04:50 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:53.779 13:04:50 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:53.779 13:04:50 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:53.779 13:04:50 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:53.779 13:04:50 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:53.779 13:04:50 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:53.779 13:04:50 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:05:53.779 13:04:50 
json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=c8b8b44b-387e-43b9-a950-dc0d98528a02 00:05:53.779 13:04:50 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:53.779 13:04:50 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:53.779 13:04:50 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:53.779 13:04:50 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:53.779 13:04:50 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:53.779 13:04:50 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:53.779 13:04:50 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:53.779 13:04:50 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:53.779 13:04:50 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:53.779 13:04:50 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:53.779 13:04:50 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:53.779 13:04:50 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:53.779 13:04:50 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:53.779 13:04:50 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:53.779 13:04:50 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:53.779 13:04:50 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:53.779 13:04:50 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:53.779 13:04:50 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:53.779 13:04:50 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:53.779 13:04:50 json_config_extra_key -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:53.779 13:04:50 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:53.779 13:04:50 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:53.779 13:04:50 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:53.779 13:04:50 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:53.779 13:04:50 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:53.779 13:04:50 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:53.779 13:04:50 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:53.779 13:04:50 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:53.779 13:04:50 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:53.779 13:04:50 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:53.779 13:04:50 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:53.779 13:04:50 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:53.779 INFO: launching applications... 00:05:53.779 13:04:50 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:53.779 13:04:50 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:53.779 13:04:50 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:53.779 13:04:50 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:53.779 13:04:50 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:53.779 13:04:50 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:53.779 13:04:50 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:53.779 13:04:50 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:53.779 13:04:50 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:53.779 13:04:50 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=73824 00:05:53.779 Waiting for target to run... 00:05:53.779 13:04:50 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
00:05:53.779 13:04:50 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 73824 /var/tmp/spdk_tgt.sock 00:05:53.779 13:04:50 json_config_extra_key -- common/autotest_common.sh@827 -- # '[' -z 73824 ']' 00:05:53.779 13:04:50 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:53.779 13:04:50 json_config_extra_key -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:53.779 13:04:50 json_config_extra_key -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:53.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:53.779 13:04:50 json_config_extra_key -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:53.779 13:04:50 json_config_extra_key -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:53.779 13:04:50 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:53.779 [2024-07-15 13:04:50.502755] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:05:53.779 [2024-07-15 13:04:50.502884] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73824 ] 00:05:54.344 [2024-07-15 13:04:50.936599] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.344 [2024-07-15 13:04:51.000755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.910 13:04:51 json_config_extra_key -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:54.910 13:04:51 json_config_extra_key -- common/autotest_common.sh@860 -- # return 0 00:05:54.910 13:04:51 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:54.910 00:05:54.910 INFO: shutting down applications... 00:05:54.910 13:04:51 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:05:54.910 13:04:51 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:54.910 13:04:51 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:54.910 13:04:51 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:54.910 13:04:51 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 73824 ]] 00:05:54.910 13:04:51 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 73824 00:05:54.910 13:04:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:54.910 13:04:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:54.910 13:04:51 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 73824 00:05:54.910 13:04:51 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:55.474 13:04:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:55.474 13:04:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:55.474 13:04:52 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 73824 00:05:55.474 13:04:52 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:55.474 13:04:52 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:55.474 13:04:52 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:55.474 SPDK target shutdown done 00:05:55.474 13:04:52 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:55.474 Success 00:05:55.474 13:04:52 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:55.474 00:05:55.474 real 0m1.671s 00:05:55.474 user 0m1.609s 00:05:55.474 sys 0m0.446s 00:05:55.474 13:04:52 json_config_extra_key -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:55.474 13:04:52 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:55.474 ************************************ 00:05:55.474 END TEST json_config_extra_key 00:05:55.474 ************************************ 00:05:55.474 13:04:52 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:55.474 13:04:52 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:55.474 13:04:52 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:55.474 13:04:52 -- common/autotest_common.sh@10 -- # set +x 00:05:55.474 ************************************ 00:05:55.474 START TEST alias_rpc 00:05:55.474 ************************************ 00:05:55.474 13:04:52 alias_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:55.474 * Looking for test storage... 
00:05:55.474 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:55.474 13:04:52 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:55.474 13:04:52 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=73895 00:05:55.474 13:04:52 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 73895 00:05:55.474 13:04:52 alias_rpc -- common/autotest_common.sh@827 -- # '[' -z 73895 ']' 00:05:55.474 13:04:52 alias_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.474 13:04:52 alias_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:55.474 13:04:52 alias_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.474 13:04:52 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:55.474 13:04:52 alias_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:55.474 13:04:52 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.732 [2024-07-15 13:04:52.226852] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:05:55.732 [2024-07-15 13:04:52.226967] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73895 ] 00:05:55.732 [2024-07-15 13:04:52.364751] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.732 [2024-07-15 13:04:52.444378] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.671 13:04:53 alias_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:56.671 13:04:53 alias_rpc -- common/autotest_common.sh@860 -- # return 0 00:05:56.671 13:04:53 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:56.930 13:04:53 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 73895 00:05:56.930 13:04:53 alias_rpc -- common/autotest_common.sh@946 -- # '[' -z 73895 ']' 00:05:56.930 13:04:53 alias_rpc -- common/autotest_common.sh@950 -- # kill -0 73895 00:05:56.930 13:04:53 alias_rpc -- common/autotest_common.sh@951 -- # uname 00:05:56.930 13:04:53 alias_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:56.930 13:04:53 alias_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 73895 00:05:56.930 13:04:53 alias_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:56.930 13:04:53 alias_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:56.930 killing process with pid 73895 00:05:56.930 13:04:53 alias_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 73895' 00:05:56.930 13:04:53 alias_rpc -- common/autotest_common.sh@965 -- # kill 73895 00:05:56.930 13:04:53 alias_rpc -- common/autotest_common.sh@970 -- # wait 73895 00:05:57.188 00:05:57.188 real 0m1.828s 00:05:57.188 user 0m2.056s 00:05:57.188 sys 0m0.458s 00:05:57.188 13:04:53 alias_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:57.188 13:04:53 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.188 ************************************ 00:05:57.188 END TEST alias_rpc 00:05:57.188 ************************************ 00:05:57.446 13:04:53 -- 
spdk/autotest.sh@176 -- # [[ 1 -eq 0 ]] 00:05:57.446 13:04:53 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:57.446 13:04:53 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:57.446 13:04:53 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:57.446 13:04:53 -- common/autotest_common.sh@10 -- # set +x 00:05:57.446 ************************************ 00:05:57.446 START TEST dpdk_mem_utility 00:05:57.446 ************************************ 00:05:57.446 13:04:53 dpdk_mem_utility -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:57.446 * Looking for test storage... 00:05:57.446 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:57.446 13:04:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:57.446 13:04:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=73987 00:05:57.446 13:04:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 73987 00:05:57.446 13:04:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:57.446 13:04:54 dpdk_mem_utility -- common/autotest_common.sh@827 -- # '[' -z 73987 ']' 00:05:57.446 13:04:54 dpdk_mem_utility -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.446 13:04:54 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:57.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.447 13:04:54 dpdk_mem_utility -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.447 13:04:54 dpdk_mem_utility -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:57.447 13:04:54 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:57.447 [2024-07-15 13:04:54.115479] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:05:57.447 [2024-07-15 13:04:54.115630] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73987 ] 00:05:57.705 [2024-07-15 13:04:54.258266] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.705 [2024-07-15 13:04:54.356928] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.642 13:04:55 dpdk_mem_utility -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:58.643 13:04:55 dpdk_mem_utility -- common/autotest_common.sh@860 -- # return 0 00:05:58.643 13:04:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:58.643 13:04:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:58.643 13:04:55 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.643 13:04:55 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:58.643 { 00:05:58.643 "filename": "/tmp/spdk_mem_dump.txt" 00:05:58.643 } 00:05:58.643 13:04:55 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.643 13:04:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:58.643 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:58.643 1 heaps totaling size 814.000000 MiB 00:05:58.643 size: 814.000000 MiB heap id: 0 00:05:58.643 end heaps---------- 00:05:58.643 8 mempools totaling size 598.116089 MiB 00:05:58.643 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:58.643 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:58.643 size: 84.521057 MiB name: bdev_io_73987 00:05:58.643 size: 51.011292 MiB name: evtpool_73987 00:05:58.643 size: 50.003479 MiB name: msgpool_73987 00:05:58.643 size: 21.763794 MiB name: PDU_Pool 00:05:58.643 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:58.643 size: 0.026123 MiB name: Session_Pool 00:05:58.643 end mempools------- 00:05:58.643 6 memzones totaling size 4.142822 MiB 00:05:58.643 size: 1.000366 MiB name: RG_ring_0_73987 00:05:58.643 size: 1.000366 MiB name: RG_ring_1_73987 00:05:58.643 size: 1.000366 MiB name: RG_ring_4_73987 00:05:58.643 size: 1.000366 MiB name: RG_ring_5_73987 00:05:58.643 size: 0.125366 MiB name: RG_ring_2_73987 00:05:58.643 size: 0.015991 MiB name: RG_ring_3_73987 00:05:58.643 end memzones------- 00:05:58.643 13:04:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:58.643 heap id: 0 total size: 814.000000 MiB number of busy elements: 232 number of free elements: 15 00:05:58.643 list of free elements. 
size: 12.484375 MiB 00:05:58.643 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:58.643 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:58.643 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:58.643 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:58.643 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:58.643 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:58.643 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:58.643 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:58.643 element at address: 0x200000200000 with size: 0.836853 MiB 00:05:58.643 element at address: 0x20001aa00000 with size: 0.570618 MiB 00:05:58.643 element at address: 0x20000b200000 with size: 0.489441 MiB 00:05:58.643 element at address: 0x200000800000 with size: 0.486877 MiB 00:05:58.643 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:58.643 element at address: 0x200027e00000 with size: 0.398315 MiB 00:05:58.643 element at address: 0x200003a00000 with size: 0.351501 MiB 00:05:58.643 list of standard malloc elements. size: 199.253052 MiB 00:05:58.643 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:58.643 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:58.643 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:58.643 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:58.643 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:58.643 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:58.643 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:58.643 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:58.643 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:58.643 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:05:58.643 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:05:58.643 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:05:58.643 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:05:58.643 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:05:58.643 element at address: 0x2000002d6780 with size: 0.000183 MiB 00:05:58.643 element at address: 0x2000002d6840 with size: 0.000183 MiB 00:05:58.643 element at address: 0x2000002d6900 with size: 0.000183 MiB 00:05:58.643 element at address: 0x2000002d69c0 with size: 0.000183 MiB 00:05:58.643 element at address: 0x2000002d6a80 with size: 0.000183 MiB 00:05:58.643 element at address: 0x2000002d6b40 with size: 0.000183 MiB 00:05:58.643 element at address: 0x2000002d6c00 with size: 0.000183 MiB 00:05:58.643 element at address: 0x2000002d6cc0 with size: 0.000183 MiB 00:05:58.643 element at address: 0x2000002d6d80 with size: 0.000183 MiB 00:05:58.643 element at address: 0x2000002d6e40 with size: 0.000183 MiB 00:05:58.643 element at address: 0x2000002d6f00 with size: 0.000183 MiB 00:05:58.643 element at address: 0x2000002d6fc0 with size: 0.000183 MiB 00:05:58.643 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:05:58.643 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:05:58.643 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:05:58.643 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:05:58.643 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:05:58.643 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:05:58.643 element at address: 0x2000002d7640 with size: 0.000183 MiB 
00:05:58.643 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:05:58.643 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:05:58.643 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:05:58.643 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:05:58.643 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:05:58.643 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:58.643 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:58.643 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:58.643 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:58.643 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:05:58.643 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:05:58.643 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:05:58.643 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:05:58.643 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:05:58.643 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:58.643 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:58.643 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:58.643 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:05:58.643 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:05:58.643 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:05:58.643 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:05:58.643 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:05:58.643 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:05:58.643 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:05:58.643 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:05:58.643 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:05:58.643 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:05:58.643 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:05:58.643 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:05:58.643 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:05:58.643 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:05:58.643 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:05:58.643 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:05:58.643 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:05:58.643 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:05:58.643 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:05:58.643 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:05:58.643 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:05:58.643 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:05:58.643 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:58.643 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:58.643 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:58.643 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:58.643 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:58.643 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:58.643 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:58.643 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:58.643 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:05:58.643 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:05:58.643 element at 
address: 0x20000b27d640 with size: 0.000183 MiB 00:05:58.643 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:05:58.643 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:05:58.643 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:05:58.643 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:05:58.643 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:58.643 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:58.643 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:58.643 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:58.643 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:58.643 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:58.644 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:58.644 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:05:58.644 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:05:58.644 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:05:58.644 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:05:58.644 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:05:58.644 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:05:58.644 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:05:58.644 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:05:58.644 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:05:58.644 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:05:58.644 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:05:58.644 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:05:58.644 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:05:58.644 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:05:58.644 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:05:58.644 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:05:58.644 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:05:58.644 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:05:58.644 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:05:58.644 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:05:58.644 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:05:58.644 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:05:58.644 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:05:58.644 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:05:58.644 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:05:58.644 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:05:58.644 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:05:58.644 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:05:58.644 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:05:58.644 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:05:58.644 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:05:58.644 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:05:58.644 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:05:58.644 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:05:58.644 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:05:58.644 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:05:58.644 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:05:58.644 element at address: 0x20001aa93d00 
with size: 0.000183 MiB 00:05:58.644 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:05:58.644 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:05:58.644 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:05:58.644 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:05:58.644 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:05:58.644 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:05:58.644 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:05:58.644 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:05:58.644 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:05:58.644 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:05:58.644 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:05:58.644 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:05:58.644 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:05:58.644 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:05:58.644 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:05:58.644 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:05:58.644 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:05:58.644 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:05:58.644 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:05:58.644 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:05:58.644 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:05:58.644 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:05:58.644 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:05:58.644 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:05:58.644 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:05:58.644 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:05:58.644 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:05:58.644 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:05:58.644 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:05:58.644 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:58.644 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:58.644 element at address: 0x200027e65f80 with size: 0.000183 MiB 00:05:58.644 element at address: 0x200027e66040 with size: 0.000183 MiB 00:05:58.644 element at address: 0x200027e6cc40 with size: 0.000183 MiB 00:05:58.644 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:05:58.644 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:05:58.644 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:05:58.644 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:05:58.644 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:05:58.644 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:05:58.644 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:05:58.644 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:05:58.644 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:05:58.644 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:05:58.644 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:05:58.644 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:05:58.644 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:05:58.644 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:05:58.644 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 
00:05:58.644 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:05:58.644 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:05:58.644 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:05:58.644 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:05:58.644 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:05:58.644 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:05:58.644 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:05:58.644 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:05:58.644 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:05:58.644 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:05:58.644 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:05:58.644 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:05:58.644 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:05:58.644 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:05:58.644 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:05:58.644 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:05:58.644 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:05:58.644 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:05:58.644 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:05:58.644 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:05:58.644 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:05:58.644 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:05:58.644 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:05:58.644 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:05:58.644 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:05:58.644 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:05:58.644 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:05:58.644 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:05:58.644 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:05:58.644 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:05:58.644 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:05:58.644 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:05:58.644 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:05:58.644 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:05:58.644 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:05:58.644 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:05:58.644 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:05:58.644 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:05:58.644 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:05:58.644 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:05:58.644 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:05:58.644 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:05:58.644 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:05:58.644 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:05:58.644 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:05:58.644 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:05:58.644 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:05:58.644 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:05:58.644 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:05:58.644 element at 
address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:58.644 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:58.644 list of memzone associated elements. size: 602.262573 MiB 00:05:58.644 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:58.644 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:58.644 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:58.644 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:58.644 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:58.644 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_73987_0 00:05:58.644 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:58.644 associated memzone info: size: 48.002930 MiB name: MP_evtpool_73987_0 00:05:58.644 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:58.644 associated memzone info: size: 48.002930 MiB name: MP_msgpool_73987_0 00:05:58.644 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:58.644 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:58.644 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:58.644 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:58.644 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:58.644 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_73987 00:05:58.644 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:58.644 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_73987 00:05:58.645 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:58.645 associated memzone info: size: 1.007996 MiB name: MP_evtpool_73987 00:05:58.645 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:58.645 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:58.645 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:58.645 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:58.645 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:58.645 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:58.645 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:58.645 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:58.645 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:58.645 associated memzone info: size: 1.000366 MiB name: RG_ring_0_73987 00:05:58.645 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:58.645 associated memzone info: size: 1.000366 MiB name: RG_ring_1_73987 00:05:58.645 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:58.645 associated memzone info: size: 1.000366 MiB name: RG_ring_4_73987 00:05:58.645 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:58.645 associated memzone info: size: 1.000366 MiB name: RG_ring_5_73987 00:05:58.645 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:58.645 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_73987 00:05:58.645 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:58.645 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:58.645 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:58.645 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:58.645 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:58.645 associated memzone info: 
size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:58.645 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:58.645 associated memzone info: size: 0.125366 MiB name: RG_ring_2_73987 00:05:58.645 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:58.645 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:58.645 element at address: 0x200027e66100 with size: 0.023743 MiB 00:05:58.645 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:58.645 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:58.645 associated memzone info: size: 0.015991 MiB name: RG_ring_3_73987 00:05:58.645 element at address: 0x200027e6c240 with size: 0.002441 MiB 00:05:58.645 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:58.645 element at address: 0x2000002d7080 with size: 0.000305 MiB 00:05:58.645 associated memzone info: size: 0.000183 MiB name: MP_msgpool_73987 00:05:58.645 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:58.645 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_73987 00:05:58.645 element at address: 0x200027e6cd00 with size: 0.000305 MiB 00:05:58.645 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:58.645 13:04:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:58.645 13:04:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 73987 00:05:58.645 13:04:55 dpdk_mem_utility -- common/autotest_common.sh@946 -- # '[' -z 73987 ']' 00:05:58.645 13:04:55 dpdk_mem_utility -- common/autotest_common.sh@950 -- # kill -0 73987 00:05:58.645 13:04:55 dpdk_mem_utility -- common/autotest_common.sh@951 -- # uname 00:05:58.645 13:04:55 dpdk_mem_utility -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:58.645 13:04:55 dpdk_mem_utility -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 73987 00:05:58.645 killing process with pid 73987 00:05:58.645 13:04:55 dpdk_mem_utility -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:58.645 13:04:55 dpdk_mem_utility -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:58.645 13:04:55 dpdk_mem_utility -- common/autotest_common.sh@964 -- # echo 'killing process with pid 73987' 00:05:58.645 13:04:55 dpdk_mem_utility -- common/autotest_common.sh@965 -- # kill 73987 00:05:58.645 13:04:55 dpdk_mem_utility -- common/autotest_common.sh@970 -- # wait 73987 00:05:59.220 00:05:59.220 real 0m1.704s 00:05:59.220 user 0m1.836s 00:05:59.220 sys 0m0.455s 00:05:59.220 13:04:55 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:59.220 13:04:55 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:59.220 ************************************ 00:05:59.220 END TEST dpdk_mem_utility 00:05:59.220 ************************************ 00:05:59.220 13:04:55 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:59.220 13:04:55 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:59.220 13:04:55 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:59.220 13:04:55 -- common/autotest_common.sh@10 -- # set +x 00:05:59.220 ************************************ 00:05:59.220 START TEST event 00:05:59.220 ************************************ 00:05:59.220 13:04:55 event -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:59.220 * Looking for test storage... 
00:05:59.220 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:59.220 13:04:55 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:59.220 13:04:55 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:59.220 13:04:55 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:59.220 13:04:55 event -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:05:59.220 13:04:55 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:59.220 13:04:55 event -- common/autotest_common.sh@10 -- # set +x 00:05:59.220 ************************************ 00:05:59.220 START TEST event_perf 00:05:59.220 ************************************ 00:05:59.220 13:04:55 event.event_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:59.220 Running I/O for 1 seconds...[2024-07-15 13:04:55.828496] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:05:59.220 [2024-07-15 13:04:55.828632] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74082 ] 00:05:59.477 [2024-07-15 13:04:55.960321] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:59.477 [2024-07-15 13:04:56.057969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:59.477 [2024-07-15 13:04:56.058169] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.477 Running I/O for 1 seconds...[2024-07-15 13:04:56.058098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:59.477 [2024-07-15 13:04:56.058161] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:00.411 00:06:00.411 lcore 0: 183126 00:06:00.411 lcore 1: 183124 00:06:00.411 lcore 2: 183124 00:06:00.411 lcore 3: 183125 00:06:00.411 done. 00:06:00.411 00:06:00.411 real 0m1.317s 00:06:00.411 user 0m4.132s 00:06:00.411 sys 0m0.064s 00:06:00.411 13:04:57 event.event_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:00.411 13:04:57 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:00.411 ************************************ 00:06:00.411 END TEST event_perf 00:06:00.411 ************************************ 00:06:00.670 13:04:57 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:00.670 13:04:57 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:00.670 13:04:57 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:00.670 13:04:57 event -- common/autotest_common.sh@10 -- # set +x 00:06:00.670 ************************************ 00:06:00.670 START TEST event_reactor 00:06:00.670 ************************************ 00:06:00.670 13:04:57 event.event_reactor -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:00.670 [2024-07-15 13:04:57.200657] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:06:00.670 [2024-07-15 13:04:57.200763] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74115 ] 00:06:00.670 [2024-07-15 13:04:57.335222] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.929 [2024-07-15 13:04:57.414651] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.864 test_start 00:06:01.864 oneshot 00:06:01.864 tick 100 00:06:01.864 tick 100 00:06:01.864 tick 250 00:06:01.864 tick 100 00:06:01.864 tick 100 00:06:01.864 tick 100 00:06:01.864 tick 250 00:06:01.864 tick 500 00:06:01.864 tick 100 00:06:01.864 tick 100 00:06:01.864 tick 250 00:06:01.864 tick 100 00:06:01.864 tick 100 00:06:01.864 test_end 00:06:01.864 00:06:01.864 real 0m1.336s 00:06:01.864 user 0m1.173s 00:06:01.864 sys 0m0.055s 00:06:01.864 13:04:58 event.event_reactor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:01.864 13:04:58 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:01.864 ************************************ 00:06:01.864 END TEST event_reactor 00:06:01.864 ************************************ 00:06:01.864 13:04:58 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:01.864 13:04:58 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:01.864 13:04:58 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:01.864 13:04:58 event -- common/autotest_common.sh@10 -- # set +x 00:06:01.864 ************************************ 00:06:01.864 START TEST event_reactor_perf 00:06:01.864 ************************************ 00:06:01.864 13:04:58 event.event_reactor_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:01.864 [2024-07-15 13:04:58.584931] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:06:01.864 [2024-07-15 13:04:58.585041] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74151 ] 00:06:02.122 [2024-07-15 13:04:58.718751] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.122 [2024-07-15 13:04:58.823366] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.497 test_start 00:06:03.497 test_end 00:06:03.497 Performance: 370693 events per second 00:06:03.497 00:06:03.497 real 0m1.331s 00:06:03.497 user 0m1.168s 00:06:03.497 sys 0m0.056s 00:06:03.497 13:04:59 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:03.497 13:04:59 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:03.497 ************************************ 00:06:03.497 END TEST event_reactor_perf 00:06:03.497 ************************************ 00:06:03.497 13:04:59 event -- event/event.sh@49 -- # uname -s 00:06:03.497 13:04:59 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:03.497 13:04:59 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:03.497 13:04:59 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:03.497 13:04:59 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:03.497 13:04:59 event -- common/autotest_common.sh@10 -- # set +x 00:06:03.497 ************************************ 00:06:03.497 START TEST event_scheduler 00:06:03.497 ************************************ 00:06:03.497 13:04:59 event.event_scheduler -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:03.497 * Looking for test storage... 00:06:03.497 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:03.497 13:05:00 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:03.497 13:05:00 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=74212 00:06:03.497 13:05:00 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:03.497 13:05:00 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:03.497 13:05:00 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 74212 00:06:03.497 13:05:00 event.event_scheduler -- common/autotest_common.sh@827 -- # '[' -z 74212 ']' 00:06:03.497 13:05:00 event.event_scheduler -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.497 13:05:00 event.event_scheduler -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:03.497 13:05:00 event.event_scheduler -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.497 13:05:00 event.event_scheduler -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:03.497 13:05:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:03.497 [2024-07-15 13:05:00.089476] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:06:03.497 [2024-07-15 13:05:00.089581] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74212 ] 00:06:03.497 [2024-07-15 13:05:00.226790] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:03.757 [2024-07-15 13:05:00.307108] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.757 [2024-07-15 13:05:00.307247] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.757 [2024-07-15 13:05:00.307334] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:03.757 [2024-07-15 13:05:00.307335] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:04.693 13:05:01 event.event_scheduler -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:04.693 13:05:01 event.event_scheduler -- common/autotest_common.sh@860 -- # return 0 00:06:04.693 13:05:01 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:04.693 13:05:01 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:04.693 13:05:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:04.693 POWER: Env isn't set yet! 00:06:04.693 POWER: Attempting to initialise ACPI cpufreq power management... 00:06:04.693 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:04.693 POWER: Cannot set governor of lcore 0 to userspace 00:06:04.693 POWER: Attempting to initialise PSTAT power management... 00:06:04.693 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:04.694 POWER: Cannot set governor of lcore 0 to performance 00:06:04.694 POWER: Attempting to initialise CPPC power management... 00:06:04.694 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:04.694 POWER: Cannot set governor of lcore 0 to userspace 00:06:04.694 POWER: Attempting to initialise VM power management... 00:06:04.694 GUEST_CHANNEL: Unable to to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:04.694 POWER: Unable to set Power Management Environment for lcore 0 00:06:04.694 [2024-07-15 13:05:01.137039] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:06:04.694 [2024-07-15 13:05:01.137053] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:06:04.694 [2024-07-15 13:05:01.137062] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:06:04.694 [2024-07-15 13:05:01.137074] scheduler_dynamic.c: 382:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:04.694 [2024-07-15 13:05:01.137082] scheduler_dynamic.c: 384:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:04.694 [2024-07-15 13:05:01.137089] scheduler_dynamic.c: 386:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:04.694 13:05:01 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:04.694 13:05:01 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:04.694 13:05:01 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:04.694 13:05:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:04.694 [2024-07-15 13:05:01.232570] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:06:04.694 13:05:01 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:04.694 13:05:01 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:04.694 13:05:01 event.event_scheduler -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:04.694 13:05:01 event.event_scheduler -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:04.694 13:05:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:04.694 ************************************ 00:06:04.694 START TEST scheduler_create_thread 00:06:04.694 ************************************ 00:06:04.694 13:05:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1121 -- # scheduler_create_thread 00:06:04.694 13:05:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:04.694 13:05:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:04.694 13:05:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:04.694 2 00:06:04.694 13:05:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:04.694 13:05:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:04.694 13:05:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:04.694 13:05:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:04.694 3 00:06:04.694 13:05:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:04.694 13:05:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:04.694 13:05:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:04.694 13:05:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:04.694 4 00:06:04.694 13:05:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:04.694 13:05:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:04.694 13:05:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:04.694 13:05:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:04.694 5 00:06:04.694 13:05:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:04.694 13:05:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:04.694 13:05:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:04.694 13:05:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:04.694 6 00:06:04.694 13:05:01 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:04.694 13:05:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:04.694 13:05:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:04.694 13:05:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:04.694 7 00:06:04.694 13:05:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:04.694 13:05:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:04.694 13:05:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:04.694 13:05:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:04.694 8 00:06:04.694 13:05:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:04.694 13:05:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:04.694 13:05:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:04.694 13:05:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:04.694 9 00:06:04.694 13:05:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:04.694 13:05:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:04.694 13:05:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:04.694 13:05:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:04.694 10 00:06:04.694 13:05:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:04.694 13:05:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:04.694 13:05:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:04.694 13:05:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:04.694 13:05:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:04.694 13:05:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:04.694 13:05:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:04.694 13:05:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:04.694 13:05:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:04.694 13:05:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:04.694 13:05:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:04.694 13:05:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:04.694 13:05:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:06.071 13:05:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.071 13:05:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:06.071 13:05:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:06.071 13:05:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.071 13:05:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:07.442 ************************************ 00:06:07.442 END TEST scheduler_create_thread 00:06:07.442 ************************************ 00:06:07.442 13:05:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:07.442 00:06:07.442 real 0m2.617s 00:06:07.442 user 0m0.018s 00:06:07.442 sys 0m0.008s 00:06:07.443 13:05:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:07.443 13:05:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:07.443 13:05:03 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:07.443 13:05:03 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 74212 00:06:07.443 13:05:03 event.event_scheduler -- common/autotest_common.sh@946 -- # '[' -z 74212 ']' 00:06:07.443 13:05:03 event.event_scheduler -- common/autotest_common.sh@950 -- # kill -0 74212 00:06:07.443 13:05:03 event.event_scheduler -- common/autotest_common.sh@951 -- # uname 00:06:07.443 13:05:03 event.event_scheduler -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:07.443 13:05:03 event.event_scheduler -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74212 00:06:07.443 killing process with pid 74212 00:06:07.443 13:05:03 event.event_scheduler -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:06:07.443 13:05:03 event.event_scheduler -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:06:07.443 13:05:03 event.event_scheduler -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74212' 00:06:07.443 13:05:03 event.event_scheduler -- common/autotest_common.sh@965 -- # kill 74212 00:06:07.443 13:05:03 event.event_scheduler -- common/autotest_common.sh@970 -- # wait 74212 00:06:07.700 [2024-07-15 13:05:04.340940] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:06:07.958 00:06:07.958 real 0m4.590s 00:06:07.958 user 0m8.959s 00:06:07.958 sys 0m0.379s 00:06:07.958 13:05:04 event.event_scheduler -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:07.958 13:05:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:07.958 ************************************ 00:06:07.958 END TEST event_scheduler 00:06:07.958 ************************************ 00:06:07.958 13:05:04 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:07.958 13:05:04 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:07.958 13:05:04 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:07.958 13:05:04 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:07.958 13:05:04 event -- common/autotest_common.sh@10 -- # set +x 00:06:07.958 ************************************ 00:06:07.958 START TEST app_repeat 00:06:07.958 ************************************ 00:06:07.958 13:05:04 event.app_repeat -- common/autotest_common.sh@1121 -- # app_repeat_test 00:06:07.958 13:05:04 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.958 13:05:04 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.958 13:05:04 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:07.958 13:05:04 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:07.958 13:05:04 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:07.958 13:05:04 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:07.958 13:05:04 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:07.958 13:05:04 event.app_repeat -- event/event.sh@19 -- # repeat_pid=74330 00:06:07.958 13:05:04 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:07.958 Process app_repeat pid: 74330 00:06:07.958 13:05:04 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:07.958 13:05:04 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 74330' 00:06:07.958 13:05:04 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:07.958 spdk_app_start Round 0 00:06:07.958 13:05:04 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:07.958 13:05:04 event.app_repeat -- event/event.sh@25 -- # waitforlisten 74330 /var/tmp/spdk-nbd.sock 00:06:07.958 13:05:04 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 74330 ']' 00:06:07.958 13:05:04 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:07.958 13:05:04 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:07.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:07.958 13:05:04 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:07.958 13:05:04 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:07.958 13:05:04 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:07.958 [2024-07-15 13:05:04.639671] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:06:07.958 [2024-07-15 13:05:04.639766] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74330 ] 00:06:08.216 [2024-07-15 13:05:04.775567] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:08.216 [2024-07-15 13:05:04.849595] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:08.216 [2024-07-15 13:05:04.849603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.474 13:05:04 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:08.474 13:05:04 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:08.474 13:05:04 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:08.732 Malloc0 00:06:08.732 13:05:05 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:08.989 Malloc1 00:06:08.989 13:05:05 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:08.989 13:05:05 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.989 13:05:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:08.989 13:05:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:08.989 13:05:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.989 13:05:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:08.989 13:05:05 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:08.989 13:05:05 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.989 13:05:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:08.989 13:05:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:08.989 13:05:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.989 13:05:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:08.989 13:05:05 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:08.989 13:05:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:08.989 13:05:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:08.989 13:05:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:09.248 /dev/nbd0 00:06:09.248 13:05:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:09.248 13:05:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:09.248 13:05:05 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:09.248 13:05:05 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:09.248 13:05:05 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:09.248 13:05:05 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:09.248 13:05:05 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:09.248 13:05:05 event.app_repeat -- 
common/autotest_common.sh@869 -- # break 00:06:09.248 13:05:05 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:09.248 13:05:05 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:09.248 13:05:05 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:09.248 1+0 records in 00:06:09.248 1+0 records out 00:06:09.248 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000409084 s, 10.0 MB/s 00:06:09.248 13:05:05 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:09.248 13:05:05 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:09.248 13:05:05 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:09.248 13:05:05 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:09.248 13:05:05 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:09.248 13:05:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:09.248 13:05:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:09.248 13:05:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:09.504 /dev/nbd1 00:06:09.504 13:05:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:09.504 13:05:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:09.504 13:05:06 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:09.504 13:05:06 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:09.504 13:05:06 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:09.504 13:05:06 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:09.504 13:05:06 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:09.504 13:05:06 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:09.504 13:05:06 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:09.504 13:05:06 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:09.504 13:05:06 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:09.504 1+0 records in 00:06:09.504 1+0 records out 00:06:09.504 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000309749 s, 13.2 MB/s 00:06:09.504 13:05:06 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:09.504 13:05:06 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:09.504 13:05:06 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:09.504 13:05:06 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:09.504 13:05:06 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:09.504 13:05:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:09.504 13:05:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:09.504 13:05:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:09.504 13:05:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
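The bdev_malloc_create / nbd_start_disk / waitfornbd exchange above amounts to the following RPC sequence. Socket path, bdev parameters, and device nodes are copied from the log; this is a stand-alone sketch, not the nbd_common.sh helper itself.

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
$RPC bdev_malloc_create 64 4096            # creates Malloc0 (64 MiB, 4 KiB blocks)
$RPC bdev_malloc_create 64 4096            # creates Malloc1
$RPC nbd_start_disk Malloc0 /dev/nbd0      # export each bdev as an NBD block device
$RPC nbd_start_disk Malloc1 /dev/nbd1
# waitfornbd: confirm the node is registered, then prove it answers a 4 KiB direct read
grep -q -w nbd0 /proc/partitions
dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct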
00:06:09.504 13:05:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:09.761 13:05:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:09.761 { 00:06:09.761 "bdev_name": "Malloc0", 00:06:09.761 "nbd_device": "/dev/nbd0" 00:06:09.761 }, 00:06:09.761 { 00:06:09.761 "bdev_name": "Malloc1", 00:06:09.761 "nbd_device": "/dev/nbd1" 00:06:09.761 } 00:06:09.761 ]' 00:06:09.761 13:05:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:09.761 13:05:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:09.761 { 00:06:09.761 "bdev_name": "Malloc0", 00:06:09.761 "nbd_device": "/dev/nbd0" 00:06:09.761 }, 00:06:09.761 { 00:06:09.761 "bdev_name": "Malloc1", 00:06:09.761 "nbd_device": "/dev/nbd1" 00:06:09.761 } 00:06:09.761 ]' 00:06:09.761 13:05:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:09.761 /dev/nbd1' 00:06:09.761 13:05:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:09.761 /dev/nbd1' 00:06:09.761 13:05:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:09.761 13:05:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:09.761 13:05:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:09.761 13:05:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:09.761 13:05:06 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:09.761 13:05:06 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:09.761 13:05:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.761 13:05:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:09.761 13:05:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:09.761 13:05:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:09.761 13:05:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:09.761 13:05:06 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:09.761 256+0 records in 00:06:09.761 256+0 records out 00:06:09.761 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00891791 s, 118 MB/s 00:06:09.761 13:05:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:09.761 13:05:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:09.761 256+0 records in 00:06:09.761 256+0 records out 00:06:09.761 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0241253 s, 43.5 MB/s 00:06:09.761 13:05:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:09.761 13:05:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:09.761 256+0 records in 00:06:09.761 256+0 records out 00:06:09.761 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.026834 s, 39.1 MB/s 00:06:09.761 13:05:06 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:09.761 13:05:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.761 13:05:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:09.761 13:05:06 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:09.761 13:05:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:09.761 13:05:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:09.761 13:05:06 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:09.761 13:05:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:09.761 13:05:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:10.018 13:05:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:10.018 13:05:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:10.018 13:05:06 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:10.018 13:05:06 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:10.018 13:05:06 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.018 13:05:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.018 13:05:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:10.018 13:05:06 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:10.018 13:05:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:10.018 13:05:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:10.274 13:05:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:10.274 13:05:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:10.274 13:05:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:10.274 13:05:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:10.274 13:05:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:10.275 13:05:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:10.275 13:05:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:10.275 13:05:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:10.275 13:05:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:10.275 13:05:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:10.530 13:05:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:10.531 13:05:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:10.531 13:05:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:10.531 13:05:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:10.531 13:05:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:10.531 13:05:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:10.531 13:05:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:10.531 13:05:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:10.531 13:05:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:10.531 13:05:07 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.531 13:05:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:10.843 13:05:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:10.843 13:05:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:10.843 13:05:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:10.843 13:05:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:10.843 13:05:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:10.843 13:05:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:10.843 13:05:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:10.843 13:05:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:10.843 13:05:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:10.843 13:05:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:10.843 13:05:07 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:10.843 13:05:07 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:10.843 13:05:07 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:11.100 13:05:07 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:11.356 [2024-07-15 13:05:07.853527] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:11.356 [2024-07-15 13:05:07.948542] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:11.356 [2024-07-15 13:05:07.948557] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.356 [2024-07-15 13:05:08.005713] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:11.356 [2024-07-15 13:05:08.005754] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:13.970 spdk_app_start Round 1 00:06:13.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:13.970 13:05:10 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:13.970 13:05:10 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:13.970 13:05:10 event.app_repeat -- event/event.sh@25 -- # waitforlisten 74330 /var/tmp/spdk-nbd.sock 00:06:13.970 13:05:10 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 74330 ']' 00:06:13.970 13:05:10 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:13.970 13:05:10 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:13.970 13:05:10 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
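Each app_repeat round above restarts the target and blocks in waitforlisten until the RPC socket is usable. A minimal approximation of that wait loop follows; the retry count and the plain socket-file probe are assumptions, as the real helper in autotest_common.sh retries an actual RPC call.

waitforlisten_sketch() {
    local pid=$1 sock=${2:-/var/tmp/spdk-nbd.sock}
    local i
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1   # target process died while we waited
        [[ -S $sock ]] && return 0               # UNIX-domain socket has appeared
        sleep 0.1
    done
    return 1
}
# usage: waitforlisten_sketch "$repeat_pid" /var/tmp/spdk-nbd.sock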
00:06:13.970 13:05:10 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:13.970 13:05:10 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:14.242 13:05:10 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:14.242 13:05:10 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:14.242 13:05:10 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:14.521 Malloc0 00:06:14.521 13:05:11 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:14.779 Malloc1 00:06:14.779 13:05:11 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:14.779 13:05:11 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.779 13:05:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:14.779 13:05:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:14.779 13:05:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.779 13:05:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:14.779 13:05:11 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:14.779 13:05:11 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.779 13:05:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:14.779 13:05:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:14.779 13:05:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.779 13:05:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:14.779 13:05:11 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:14.779 13:05:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:14.779 13:05:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:14.779 13:05:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:15.036 /dev/nbd0 00:06:15.036 13:05:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:15.036 13:05:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:15.036 13:05:11 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:15.036 13:05:11 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:15.036 13:05:11 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:15.036 13:05:11 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:15.036 13:05:11 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:15.036 13:05:11 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:15.036 13:05:11 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:15.036 13:05:11 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:15.036 13:05:11 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:15.036 1+0 records in 00:06:15.036 1+0 records out 
00:06:15.037 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000367822 s, 11.1 MB/s 00:06:15.037 13:05:11 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:15.037 13:05:11 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:15.037 13:05:11 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:15.037 13:05:11 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:15.037 13:05:11 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:15.037 13:05:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:15.037 13:05:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:15.037 13:05:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:15.295 /dev/nbd1 00:06:15.295 13:05:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:15.295 13:05:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:15.295 13:05:11 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:15.295 13:05:11 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:15.295 13:05:11 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:15.295 13:05:11 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:15.295 13:05:11 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:15.295 13:05:11 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:15.295 13:05:11 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:15.295 13:05:11 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:15.295 13:05:11 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:15.295 1+0 records in 00:06:15.295 1+0 records out 00:06:15.295 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000290408 s, 14.1 MB/s 00:06:15.295 13:05:11 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:15.295 13:05:11 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:15.295 13:05:11 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:15.295 13:05:11 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:15.295 13:05:11 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:15.295 13:05:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:15.295 13:05:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:15.295 13:05:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:15.295 13:05:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.295 13:05:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:15.553 13:05:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:15.553 { 00:06:15.553 "bdev_name": "Malloc0", 00:06:15.553 "nbd_device": "/dev/nbd0" 00:06:15.553 }, 00:06:15.553 { 00:06:15.553 "bdev_name": "Malloc1", 00:06:15.553 "nbd_device": "/dev/nbd1" 00:06:15.553 } 
00:06:15.553 ]' 00:06:15.553 13:05:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:15.553 { 00:06:15.553 "bdev_name": "Malloc0", 00:06:15.553 "nbd_device": "/dev/nbd0" 00:06:15.553 }, 00:06:15.553 { 00:06:15.553 "bdev_name": "Malloc1", 00:06:15.553 "nbd_device": "/dev/nbd1" 00:06:15.553 } 00:06:15.553 ]' 00:06:15.553 13:05:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:15.811 13:05:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:15.811 /dev/nbd1' 00:06:15.811 13:05:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:15.811 /dev/nbd1' 00:06:15.811 13:05:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:15.811 13:05:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:15.811 13:05:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:15.811 13:05:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:15.811 13:05:12 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:15.811 13:05:12 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:15.811 13:05:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.811 13:05:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:15.811 13:05:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:15.811 13:05:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:15.811 13:05:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:15.811 13:05:12 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:15.811 256+0 records in 00:06:15.811 256+0 records out 00:06:15.811 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00691934 s, 152 MB/s 00:06:15.811 13:05:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:15.811 13:05:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:15.811 256+0 records in 00:06:15.811 256+0 records out 00:06:15.811 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0250458 s, 41.9 MB/s 00:06:15.811 13:05:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:15.811 13:05:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:15.811 256+0 records in 00:06:15.811 256+0 records out 00:06:15.811 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0301237 s, 34.8 MB/s 00:06:15.811 13:05:12 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:15.811 13:05:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.811 13:05:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:15.811 13:05:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:15.811 13:05:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:15.811 13:05:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:15.811 13:05:12 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:15.811 13:05:12 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:15.811 13:05:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:15.811 13:05:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:15.811 13:05:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:15.811 13:05:12 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:15.811 13:05:12 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:15.811 13:05:12 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.811 13:05:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.811 13:05:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:15.811 13:05:12 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:15.811 13:05:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:15.811 13:05:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:16.067 13:05:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:16.067 13:05:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:16.067 13:05:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:16.067 13:05:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:16.067 13:05:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:16.067 13:05:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:16.067 13:05:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:16.067 13:05:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:16.067 13:05:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:16.067 13:05:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:16.324 13:05:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:16.324 13:05:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:16.324 13:05:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:16.324 13:05:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:16.324 13:05:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:16.324 13:05:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:16.324 13:05:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:16.324 13:05:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:16.324 13:05:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:16.324 13:05:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.324 13:05:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:16.582 13:05:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:16.582 13:05:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:16.582 13:05:13 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:06:16.841 13:05:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:16.841 13:05:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:16.841 13:05:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:16.841 13:05:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:16.841 13:05:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:16.841 13:05:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:16.841 13:05:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:16.841 13:05:13 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:16.841 13:05:13 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:16.841 13:05:13 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:17.099 13:05:13 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:17.099 [2024-07-15 13:05:13.807945] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:17.356 [2024-07-15 13:05:13.879636] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:17.356 [2024-07-15 13:05:13.879663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.356 [2024-07-15 13:05:13.936287] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:17.356 [2024-07-15 13:05:13.936368] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:20.634 13:05:16 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:20.634 spdk_app_start Round 2 00:06:20.634 13:05:16 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:20.634 13:05:16 event.app_repeat -- event/event.sh@25 -- # waitforlisten 74330 /var/tmp/spdk-nbd.sock 00:06:20.634 13:05:16 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 74330 ']' 00:06:20.634 13:05:16 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:20.634 13:05:16 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:20.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:20.634 13:05:16 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
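The 256-block dd writes and cmp reads logged in each round make up the nbd_dd_data_verify pass. Written out linearly (paths, block sizes, and counts taken from the log), the pass looks like this sketch:

TMP=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
dd if=/dev/urandom of="$TMP" bs=4096 count=256                 # 1 MiB of random data
for nbd in /dev/nbd0 /dev/nbd1; do
    dd if="$TMP" of="$nbd" bs=4096 count=256 oflag=direct      # write pass to each NBD device
done
for nbd in /dev/nbd0 /dev/nbd1; do
    cmp -b -n 1M "$TMP" "$nbd"                                 # verify pass: fail on the first differing byte
done
rm "$TMP"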
00:06:20.634 13:05:16 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:20.634 13:05:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:20.634 13:05:16 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:20.634 13:05:16 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:20.634 13:05:16 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:20.634 Malloc0 00:06:20.634 13:05:17 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:20.895 Malloc1 00:06:20.895 13:05:17 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:20.895 13:05:17 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.895 13:05:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:20.895 13:05:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:20.895 13:05:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.895 13:05:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:20.895 13:05:17 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:20.895 13:05:17 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.895 13:05:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:20.895 13:05:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:20.895 13:05:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.895 13:05:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:20.895 13:05:17 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:20.895 13:05:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:20.895 13:05:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:20.895 13:05:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:21.153 /dev/nbd0 00:06:21.153 13:05:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:21.153 13:05:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:21.153 13:05:17 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:21.153 13:05:17 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:21.153 13:05:17 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:21.153 13:05:17 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:21.153 13:05:17 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:21.153 13:05:17 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:21.153 13:05:17 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:21.153 13:05:17 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:21.153 13:05:17 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:21.153 1+0 records in 00:06:21.153 1+0 records out 
00:06:21.153 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000404767 s, 10.1 MB/s 00:06:21.153 13:05:17 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:21.153 13:05:17 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:21.153 13:05:17 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:21.153 13:05:17 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:21.153 13:05:17 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:21.153 13:05:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:21.153 13:05:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:21.153 13:05:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:21.412 /dev/nbd1 00:06:21.412 13:05:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:21.412 13:05:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:21.412 13:05:17 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:21.412 13:05:17 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:21.412 13:05:17 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:21.412 13:05:17 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:21.412 13:05:17 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:21.412 13:05:17 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:21.412 13:05:17 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:21.412 13:05:17 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:21.412 13:05:17 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:21.412 1+0 records in 00:06:21.412 1+0 records out 00:06:21.412 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000254884 s, 16.1 MB/s 00:06:21.412 13:05:17 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:21.412 13:05:17 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:21.412 13:05:17 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:21.412 13:05:17 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:21.412 13:05:17 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:21.412 13:05:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:21.412 13:05:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:21.412 13:05:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:21.412 13:05:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.412 13:05:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:21.671 13:05:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:21.671 { 00:06:21.671 "bdev_name": "Malloc0", 00:06:21.671 "nbd_device": "/dev/nbd0" 00:06:21.671 }, 00:06:21.671 { 00:06:21.671 "bdev_name": "Malloc1", 00:06:21.671 "nbd_device": "/dev/nbd1" 00:06:21.671 } 
00:06:21.671 ]' 00:06:21.671 13:05:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:21.671 { 00:06:21.671 "bdev_name": "Malloc0", 00:06:21.671 "nbd_device": "/dev/nbd0" 00:06:21.671 }, 00:06:21.671 { 00:06:21.671 "bdev_name": "Malloc1", 00:06:21.671 "nbd_device": "/dev/nbd1" 00:06:21.671 } 00:06:21.671 ]' 00:06:21.671 13:05:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:21.671 13:05:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:21.671 /dev/nbd1' 00:06:21.671 13:05:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:21.671 /dev/nbd1' 00:06:21.671 13:05:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:21.671 13:05:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:21.671 13:05:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:21.671 13:05:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:21.671 13:05:18 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:21.671 13:05:18 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:21.671 13:05:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.671 13:05:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:21.671 13:05:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:21.671 13:05:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:21.671 13:05:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:21.671 13:05:18 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:21.671 256+0 records in 00:06:21.671 256+0 records out 00:06:21.671 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00736752 s, 142 MB/s 00:06:21.671 13:05:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:21.672 13:05:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:21.672 256+0 records in 00:06:21.672 256+0 records out 00:06:21.672 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0243672 s, 43.0 MB/s 00:06:21.672 13:05:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:21.672 13:05:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:21.672 256+0 records in 00:06:21.672 256+0 records out 00:06:21.672 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0333417 s, 31.4 MB/s 00:06:21.672 13:05:18 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:21.672 13:05:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.672 13:05:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:21.672 13:05:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:21.672 13:05:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:21.672 13:05:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:21.672 13:05:18 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:21.672 13:05:18 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:21.672 13:05:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:21.672 13:05:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:21.672 13:05:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:21.672 13:05:18 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:21.672 13:05:18 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:21.672 13:05:18 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.672 13:05:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.672 13:05:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:21.672 13:05:18 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:21.672 13:05:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:21.672 13:05:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:22.238 13:05:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:22.238 13:05:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:22.238 13:05:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:22.238 13:05:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:22.238 13:05:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:22.238 13:05:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:22.238 13:05:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:22.238 13:05:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:22.238 13:05:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:22.238 13:05:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:22.496 13:05:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:22.496 13:05:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:22.496 13:05:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:22.496 13:05:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:22.496 13:05:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:22.496 13:05:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:22.496 13:05:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:22.496 13:05:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:22.496 13:05:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:22.496 13:05:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.496 13:05:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:22.774 13:05:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:22.774 13:05:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:22.774 13:05:19 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:06:22.774 13:05:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:22.774 13:05:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:22.774 13:05:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:22.774 13:05:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:22.774 13:05:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:22.774 13:05:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:22.774 13:05:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:22.774 13:05:19 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:22.774 13:05:19 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:22.774 13:05:19 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:23.058 13:05:19 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:23.058 [2024-07-15 13:05:19.783913] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:23.316 [2024-07-15 13:05:19.845166] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.316 [2024-07-15 13:05:19.845171] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.316 [2024-07-15 13:05:19.900683] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:23.316 [2024-07-15 13:05:19.900763] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:25.846 13:05:22 event.app_repeat -- event/event.sh@38 -- # waitforlisten 74330 /var/tmp/spdk-nbd.sock 00:06:25.846 13:05:22 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 74330 ']' 00:06:25.846 13:05:22 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:25.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:25.846 13:05:22 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:25.846 13:05:22 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
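After the nbd_stop_disk calls, the test re-counts exported devices; the nbd_get_count lines above reduce to the jq/grep pipeline below. The RPC path and the /dev/nbd match string come from the log; the function wrapper is a sketch.

count_nbd_disks() {
    local sock=$1
    local rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # nbd_get_disks returns a JSON array of {bdev_name, nbd_device} objects
    "$rpc" -s "$sock" nbd_get_disks \
        | jq -r '.[] | .nbd_device' \
        | grep -c /dev/nbd || true    # grep -c prints 0 (and exits non-zero) once all disks are stopped
}
# expected: 2 while Malloc0/Malloc1 are exported, 0 after both nbd_stop_disk calls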
00:06:25.846 13:05:22 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:25.846 13:05:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:26.414 13:05:22 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:26.414 13:05:22 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:26.414 13:05:22 event.app_repeat -- event/event.sh@39 -- # killprocess 74330 00:06:26.414 13:05:22 event.app_repeat -- common/autotest_common.sh@946 -- # '[' -z 74330 ']' 00:06:26.414 13:05:22 event.app_repeat -- common/autotest_common.sh@950 -- # kill -0 74330 00:06:26.414 13:05:22 event.app_repeat -- common/autotest_common.sh@951 -- # uname 00:06:26.414 13:05:22 event.app_repeat -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:26.414 13:05:22 event.app_repeat -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74330 00:06:26.414 13:05:22 event.app_repeat -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:26.414 killing process with pid 74330 00:06:26.414 13:05:22 event.app_repeat -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:26.414 13:05:22 event.app_repeat -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74330' 00:06:26.414 13:05:22 event.app_repeat -- common/autotest_common.sh@965 -- # kill 74330 00:06:26.414 13:05:22 event.app_repeat -- common/autotest_common.sh@970 -- # wait 74330 00:06:26.414 spdk_app_start is called in Round 0. 00:06:26.414 Shutdown signal received, stop current app iteration 00:06:26.414 Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 reinitialization... 00:06:26.414 spdk_app_start is called in Round 1. 00:06:26.414 Shutdown signal received, stop current app iteration 00:06:26.414 Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 reinitialization... 00:06:26.414 spdk_app_start is called in Round 2. 00:06:26.414 Shutdown signal received, stop current app iteration 00:06:26.414 Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 reinitialization... 00:06:26.414 spdk_app_start is called in Round 3. 00:06:26.414 Shutdown signal received, stop current app iteration 00:06:26.414 13:05:23 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:26.414 13:05:23 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:26.414 00:06:26.414 real 0m18.526s 00:06:26.414 user 0m41.498s 00:06:26.414 sys 0m3.139s 00:06:26.414 13:05:23 event.app_repeat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:26.414 13:05:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:26.414 ************************************ 00:06:26.414 END TEST app_repeat 00:06:26.414 ************************************ 00:06:26.674 13:05:23 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:26.674 13:05:23 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:26.674 13:05:23 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:26.674 13:05:23 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:26.674 13:05:23 event -- common/autotest_common.sh@10 -- # set +x 00:06:26.674 ************************************ 00:06:26.674 START TEST cpu_locks 00:06:26.674 ************************************ 00:06:26.674 13:05:23 event.cpu_locks -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:26.674 * Looking for test storage... 
00:06:26.674 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:26.674 13:05:23 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:26.674 13:05:23 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:26.674 13:05:23 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:26.674 13:05:23 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:26.674 13:05:23 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:26.674 13:05:23 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:26.674 13:05:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:26.674 ************************************ 00:06:26.674 START TEST default_locks 00:06:26.674 ************************************ 00:06:26.674 13:05:23 event.cpu_locks.default_locks -- common/autotest_common.sh@1121 -- # default_locks 00:06:26.674 13:05:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=74943 00:06:26.674 13:05:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:26.674 13:05:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 74943 00:06:26.674 13:05:23 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 74943 ']' 00:06:26.674 13:05:23 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.674 13:05:23 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:26.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.674 13:05:23 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.674 13:05:23 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:26.674 13:05:23 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:26.674 [2024-07-15 13:05:23.337149] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:06:26.674 [2024-07-15 13:05:23.337286] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74943 ] 00:06:26.930 [2024-07-15 13:05:23.465284] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.930 [2024-07-15 13:05:23.557586] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.865 13:05:24 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:27.865 13:05:24 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 0 00:06:27.865 13:05:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 74943 00:06:27.865 13:05:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 74943 00:06:27.865 13:05:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:28.123 13:05:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 74943 00:06:28.123 13:05:24 event.cpu_locks.default_locks -- common/autotest_common.sh@946 -- # '[' -z 74943 ']' 00:06:28.123 13:05:24 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # kill -0 74943 00:06:28.123 13:05:24 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # uname 00:06:28.123 13:05:24 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:28.123 13:05:24 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74943 00:06:28.123 13:05:24 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:28.123 13:05:24 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:28.123 killing process with pid 74943 00:06:28.123 13:05:24 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74943' 00:06:28.123 13:05:24 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # kill 74943 00:06:28.123 13:05:24 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # wait 74943 00:06:28.691 13:05:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 74943 00:06:28.691 13:05:25 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:28.691 13:05:25 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 74943 00:06:28.691 13:05:25 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:28.691 13:05:25 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:28.691 13:05:25 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:28.691 13:05:25 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:28.691 13:05:25 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 74943 00:06:28.691 13:05:25 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 74943 ']' 00:06:28.691 13:05:25 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.691 13:05:25 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:28.691 Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock... 00:06:28.691 13:05:25 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.691 13:05:25 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:28.691 13:05:25 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:28.691 ERROR: process (pid: 74943) is no longer running 00:06:28.691 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 842: kill: (74943) - No such process 00:06:28.691 13:05:25 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:28.691 13:05:25 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 1 00:06:28.691 13:05:25 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:28.691 13:05:25 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:28.691 13:05:25 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:28.691 13:05:25 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:28.691 13:05:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:28.691 13:05:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:28.691 13:05:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:28.691 13:05:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:28.691 00:06:28.691 real 0m1.893s 00:06:28.691 user 0m1.995s 00:06:28.691 sys 0m0.582s 00:06:28.691 13:05:25 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:28.691 13:05:25 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:28.691 ************************************ 00:06:28.691 END TEST default_locks 00:06:28.691 ************************************ 00:06:28.691 13:05:25 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:28.691 13:05:25 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:28.691 13:05:25 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:28.691 13:05:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:28.691 ************************************ 00:06:28.691 START TEST default_locks_via_rpc 00:06:28.691 ************************************ 00:06:28.691 13:05:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1121 -- # default_locks_via_rpc 00:06:28.691 13:05:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:28.691 13:05:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=75007 00:06:28.691 13:05:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 75007 00:06:28.691 13:05:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 75007 ']' 00:06:28.691 13:05:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.691 13:05:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:28.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:28.691 13:05:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.691 13:05:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:28.691 13:05:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:28.691 [2024-07-15 13:05:25.292376] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:28.691 [2024-07-15 13:05:25.292539] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75007 ] 00:06:28.691 [2024-07-15 13:05:25.423762] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.951 [2024-07-15 13:05:25.513565] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.886 13:05:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:29.886 13:05:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:29.886 13:05:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:29.886 13:05:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:29.886 13:05:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.886 13:05:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:29.886 13:05:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:29.886 13:05:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:29.886 13:05:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:29.886 13:05:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:29.886 13:05:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:29.886 13:05:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:29.886 13:05:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.886 13:05:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:29.886 13:05:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 75007 00:06:29.886 13:05:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 75007 00:06:29.886 13:05:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:30.145 13:05:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 75007 00:06:30.145 13:05:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@946 -- # '[' -z 75007 ']' 00:06:30.145 13:05:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # kill -0 75007 00:06:30.145 13:05:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # uname 00:06:30.145 13:05:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:30.145 13:05:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 
-- # ps --no-headers -o comm= 75007 00:06:30.145 13:05:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:30.145 killing process with pid 75007 00:06:30.145 13:05:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:30.145 13:05:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75007' 00:06:30.145 13:05:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # kill 75007 00:06:30.145 13:05:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # wait 75007 00:06:30.713 00:06:30.713 real 0m2.046s 00:06:30.713 user 0m2.260s 00:06:30.713 sys 0m0.622s 00:06:30.713 13:05:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:30.713 13:05:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.713 ************************************ 00:06:30.713 END TEST default_locks_via_rpc 00:06:30.713 ************************************ 00:06:30.713 13:05:27 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:30.713 13:05:27 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:30.713 13:05:27 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:30.713 13:05:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:30.713 ************************************ 00:06:30.713 START TEST non_locking_app_on_locked_coremask 00:06:30.713 ************************************ 00:06:30.713 13:05:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # non_locking_app_on_locked_coremask 00:06:30.713 13:05:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=75077 00:06:30.713 13:05:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:30.713 13:05:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 75077 /var/tmp/spdk.sock 00:06:30.713 13:05:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 75077 ']' 00:06:30.713 13:05:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.713 13:05:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:30.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.713 13:05:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.713 13:05:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:30.713 13:05:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:30.713 [2024-07-15 13:05:27.399954] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
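The default_locks_via_rpc case that finishes just above toggles core-lock claiming at runtime through the framework_disable_cpumask_locks and framework_enable_cpumask_locks RPCs (rpc_cmd is the test-harness wrapper). A hedged way to issue the same calls by hand, assuming SPDK's scripts/rpc.py client and the socket path shown in the log:

  scripts/rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks   # release the per-core lock files
  scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks    # re-claim them for the current core mask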
00:06:30.714 [2024-07-15 13:05:27.400113] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75077 ] 00:06:30.972 [2024-07-15 13:05:27.540471] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.972 [2024-07-15 13:05:27.629619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.913 13:05:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:31.913 13:05:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:31.913 13:05:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=75105 00:06:31.913 13:05:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 75105 /var/tmp/spdk2.sock 00:06:31.913 13:05:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:31.913 13:05:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 75105 ']' 00:06:31.913 13:05:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:31.913 13:05:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:31.913 13:05:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:31.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:31.913 13:05:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:31.913 13:05:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:31.913 [2024-07-15 13:05:28.427354] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:31.913 [2024-07-15 13:05:28.427491] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75105 ] 00:06:31.913 [2024-07-15 13:05:28.569200] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
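non_locking_app_on_locked_coremask, running above, shows that a second target can share an already-claimed core only when it opts out of lock claiming. A sketch of the two launches, with the flags exactly as they appear in this log:

  ./build/bin/spdk_tgt -m 0x1 &                                                 # first target claims core 0
  ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &  # second target skips claiming, so it starts cleanly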
00:06:31.913 [2024-07-15 13:05:28.572301] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.172 [2024-07-15 13:05:28.769687] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.740 13:05:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:32.740 13:05:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:32.740 13:05:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 75077 00:06:32.740 13:05:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 75077 00:06:32.740 13:05:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:33.676 13:05:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 75077 00:06:33.677 13:05:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 75077 ']' 00:06:33.677 13:05:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 75077 00:06:33.677 13:05:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:33.677 13:05:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:33.677 13:05:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75077 00:06:33.677 13:05:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:33.677 killing process with pid 75077 00:06:33.677 13:05:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:33.677 13:05:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75077' 00:06:33.677 13:05:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 75077 00:06:33.677 13:05:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 75077 00:06:34.244 13:05:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 75105 00:06:34.244 13:05:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 75105 ']' 00:06:34.244 13:05:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 75105 00:06:34.244 13:05:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:34.244 13:05:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:34.244 13:05:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75105 00:06:34.244 13:05:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:34.244 13:05:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:34.244 killing process with pid 75105 00:06:34.244 13:05:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75105' 00:06:34.244 13:05:30 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 75105 00:06:34.244 13:05:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 75105 00:06:34.824 00:06:34.824 real 0m3.959s 00:06:34.824 user 0m4.352s 00:06:34.824 sys 0m1.109s 00:06:34.824 ************************************ 00:06:34.824 END TEST non_locking_app_on_locked_coremask 00:06:34.824 ************************************ 00:06:34.824 13:05:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:34.824 13:05:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:34.824 13:05:31 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:34.824 13:05:31 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:34.824 13:05:31 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:34.824 13:05:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:34.824 ************************************ 00:06:34.824 START TEST locking_app_on_unlocked_coremask 00:06:34.824 ************************************ 00:06:34.824 13:05:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_unlocked_coremask 00:06:34.824 13:05:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=75184 00:06:34.824 13:05:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 75184 /var/tmp/spdk.sock 00:06:34.824 13:05:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 75184 ']' 00:06:34.824 13:05:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.824 13:05:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:34.824 13:05:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.824 13:05:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:34.824 13:05:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:34.824 13:05:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:34.824 [2024-07-15 13:05:31.411044] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:34.824 [2024-07-15 13:05:31.411157] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75184 ] 00:06:34.824 [2024-07-15 13:05:31.550703] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:34.824 [2024-07-15 13:05:31.550768] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.087 [2024-07-15 13:05:31.649146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.085 13:05:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:36.085 13:05:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:36.085 13:05:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=75212 00:06:36.085 13:05:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 75212 /var/tmp/spdk2.sock 00:06:36.085 13:05:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:36.085 13:05:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 75212 ']' 00:06:36.085 13:05:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:36.085 13:05:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:36.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:36.085 13:05:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:36.085 13:05:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:36.085 13:05:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:36.085 [2024-07-15 13:05:32.517224] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
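locking_app_on_unlocked_coremask inverts the previous case: the first target above is the one started with --disable-cpumask-locks, so core 0 stays unclaimed and the plain second target launched just above can take the lock itself, which the lslocks check in the continuation confirms. In sketch form, with the flags as captured here:

  ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &   # leaves core 0 unclaimed
  ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &    # plain second target claims core 0 normally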
00:06:36.085 [2024-07-15 13:05:32.517327] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75212 ] 00:06:36.085 [2024-07-15 13:05:32.663605] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.343 [2024-07-15 13:05:32.839598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.910 13:05:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:36.910 13:05:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:36.910 13:05:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 75212 00:06:36.910 13:05:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 75212 00:06:36.910 13:05:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:37.845 13:05:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 75184 00:06:37.845 13:05:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 75184 ']' 00:06:37.845 13:05:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 75184 00:06:37.845 13:05:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:37.845 13:05:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:37.845 13:05:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75184 00:06:37.845 13:05:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:37.845 13:05:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:37.845 killing process with pid 75184 00:06:37.845 13:05:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75184' 00:06:37.845 13:05:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 75184 00:06:37.845 13:05:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 75184 00:06:38.411 13:05:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 75212 00:06:38.411 13:05:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 75212 ']' 00:06:38.411 13:05:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 75212 00:06:38.411 13:05:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:38.411 13:05:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:38.411 13:05:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75212 00:06:38.411 13:05:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:38.411 killing process with pid 75212 00:06:38.411 13:05:35 event.cpu_locks.locking_app_on_unlocked_coremask 
-- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:38.411 13:05:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75212' 00:06:38.411 13:05:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 75212 00:06:38.411 13:05:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 75212 00:06:38.976 00:06:38.976 real 0m4.079s 00:06:38.976 user 0m4.521s 00:06:38.976 sys 0m1.132s 00:06:38.976 ************************************ 00:06:38.976 END TEST locking_app_on_unlocked_coremask 00:06:38.976 ************************************ 00:06:38.976 13:05:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:38.976 13:05:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:38.976 13:05:35 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:38.976 13:05:35 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:38.976 13:05:35 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:38.976 13:05:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:38.976 ************************************ 00:06:38.976 START TEST locking_app_on_locked_coremask 00:06:38.976 ************************************ 00:06:38.976 13:05:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_locked_coremask 00:06:38.976 13:05:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=75291 00:06:38.976 13:05:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 75291 /var/tmp/spdk.sock 00:06:38.976 13:05:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 75291 ']' 00:06:38.976 13:05:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:38.976 13:05:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.977 13:05:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:38.977 13:05:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.977 13:05:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:38.977 13:05:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:38.977 [2024-07-15 13:05:35.535681] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:06:38.977 [2024-07-15 13:05:35.535779] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75291 ] 00:06:38.977 [2024-07-15 13:05:35.671101] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.234 [2024-07-15 13:05:35.766854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.798 13:05:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:39.798 13:05:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:39.798 13:05:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=75321 00:06:39.798 13:05:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 75321 /var/tmp/spdk2.sock 00:06:39.798 13:05:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:39.798 13:05:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:39.798 13:05:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 75321 /var/tmp/spdk2.sock 00:06:39.798 13:05:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:39.798 13:05:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:39.798 13:05:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:39.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:39.798 13:05:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:39.798 13:05:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 75321 /var/tmp/spdk2.sock 00:06:39.798 13:05:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 75321 ']' 00:06:39.798 13:05:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:39.798 13:05:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:39.798 13:05:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:39.798 13:05:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:39.798 13:05:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:40.055 [2024-07-15 13:05:36.546096] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
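locking_app_on_locked_coremask is the negative case: with the first target holding the core-0 lock, the plain second launch just above is expected to abort, as the claim_cpu_cores error in the continuation shows. In sketch form:

  ./build/bin/spdk_tgt -m 0x1 &                       # claims core 0
  ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock  # expected to fail: core 0 is already claimed by the first target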
00:06:40.055 [2024-07-15 13:05:36.546200] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75321 ] 00:06:40.055 [2024-07-15 13:05:36.693446] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 75291 has claimed it. 00:06:40.055 [2024-07-15 13:05:36.693528] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:40.620 ERROR: process (pid: 75321) is no longer running 00:06:40.620 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 842: kill: (75321) - No such process 00:06:40.620 13:05:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:40.620 13:05:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 1 00:06:40.620 13:05:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:40.620 13:05:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:40.620 13:05:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:40.620 13:05:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:40.620 13:05:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 75291 00:06:40.620 13:05:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 75291 00:06:40.620 13:05:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:41.183 13:05:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 75291 00:06:41.183 13:05:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 75291 ']' 00:06:41.183 13:05:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 75291 00:06:41.183 13:05:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:41.183 13:05:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:41.183 13:05:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75291 00:06:41.183 killing process with pid 75291 00:06:41.183 13:05:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:41.183 13:05:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:41.183 13:05:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75291' 00:06:41.183 13:05:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 75291 00:06:41.183 13:05:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 75291 00:06:41.440 00:06:41.440 real 0m2.603s 00:06:41.440 user 0m2.957s 00:06:41.440 sys 0m0.675s 00:06:41.440 13:05:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:41.440 13:05:38 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:06:41.440 ************************************ 00:06:41.440 END TEST locking_app_on_locked_coremask 00:06:41.440 ************************************ 00:06:41.440 13:05:38 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:41.440 13:05:38 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:41.440 13:05:38 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:41.440 13:05:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:41.440 ************************************ 00:06:41.440 START TEST locking_overlapped_coremask 00:06:41.440 ************************************ 00:06:41.440 13:05:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask 00:06:41.440 13:05:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=75367 00:06:41.440 13:05:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 75367 /var/tmp/spdk.sock 00:06:41.440 13:05:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:41.440 13:05:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 75367 ']' 00:06:41.440 13:05:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.440 13:05:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:41.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:41.440 13:05:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.440 13:05:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:41.440 13:05:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:41.698 [2024-07-15 13:05:38.193344] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:06:41.698 [2024-07-15 13:05:38.193457] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75367 ] 00:06:41.698 [2024-07-15 13:05:38.329058] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:41.698 [2024-07-15 13:05:38.426479] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:41.698 [2024-07-15 13:05:38.426585] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:41.698 [2024-07-15 13:05:38.426588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.635 13:05:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:42.635 13:05:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:42.635 13:05:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=75402 00:06:42.635 13:05:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:42.635 13:05:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 75402 /var/tmp/spdk2.sock 00:06:42.635 13:05:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:42.635 13:05:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 75402 /var/tmp/spdk2.sock 00:06:42.635 13:05:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:42.635 13:05:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:42.635 13:05:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:42.635 13:05:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:42.635 13:05:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 75402 /var/tmp/spdk2.sock 00:06:42.635 13:05:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 75402 ']' 00:06:42.635 13:05:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:42.635 13:05:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:42.635 13:05:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:42.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:42.635 13:05:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:42.635 13:05:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:42.635 [2024-07-15 13:05:39.234280] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
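locking_overlapped_coremask extends the conflict to multi-core masks: the first target above runs with -m 0x7 (cores 0-2) and the second with -m 0x1c (cores 2-4), so they collide on core 2 and the second launch is expected to fail, as the error in the continuation shows. In sketch form:

  ./build/bin/spdk_tgt -m 0x7 &                        # claims cores 0, 1 and 2
  ./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock  # expected to fail on the shared core 2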
00:06:42.635 [2024-07-15 13:05:39.234390] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75402 ] 00:06:42.893 [2024-07-15 13:05:39.379018] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 75367 has claimed it. 00:06:42.893 [2024-07-15 13:05:39.383235] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:43.460 ERROR: process (pid: 75402) is no longer running 00:06:43.460 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 842: kill: (75402) - No such process 00:06:43.460 13:05:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:43.460 13:05:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 1 00:06:43.460 13:05:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:43.460 13:05:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:43.460 13:05:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:43.460 13:05:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:43.460 13:05:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:43.460 13:05:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:43.460 13:05:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:43.460 13:05:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:43.460 13:05:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 75367 00:06:43.460 13:05:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@946 -- # '[' -z 75367 ']' 00:06:43.460 13:05:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # kill -0 75367 00:06:43.460 13:05:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # uname 00:06:43.460 13:05:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:43.460 13:05:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75367 00:06:43.460 13:05:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:43.460 13:05:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:43.460 killing process with pid 75367 00:06:43.460 13:05:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75367' 00:06:43.460 13:05:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # kill 75367 00:06:43.461 13:05:39 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@970 -- # wait 75367 00:06:43.720 00:06:43.720 real 0m2.216s 00:06:43.720 user 0m6.173s 00:06:43.720 sys 0m0.463s 00:06:43.720 13:05:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:43.720 13:05:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:43.720 ************************************ 00:06:43.720 END TEST locking_overlapped_coremask 00:06:43.720 ************************************ 00:06:43.720 13:05:40 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:43.720 13:05:40 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:43.720 13:05:40 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:43.720 13:05:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:43.720 ************************************ 00:06:43.720 START TEST locking_overlapped_coremask_via_rpc 00:06:43.720 ************************************ 00:06:43.720 13:05:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask_via_rpc 00:06:43.720 13:05:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=75449 00:06:43.720 13:05:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 75449 /var/tmp/spdk.sock 00:06:43.720 13:05:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:43.720 13:05:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 75449 ']' 00:06:43.720 13:05:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.720 13:05:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:43.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:43.720 13:05:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.720 13:05:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:43.720 13:05:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:43.720 [2024-07-15 13:05:40.455566] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:43.720 [2024-07-15 13:05:40.455664] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75449 ] 00:06:43.978 [2024-07-15 13:05:40.587332] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:43.978 [2024-07-15 13:05:40.587383] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:43.978 [2024-07-15 13:05:40.682396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:43.978 [2024-07-15 13:05:40.682472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:43.978 [2024-07-15 13:05:40.682475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.933 13:05:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:44.933 13:05:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:44.933 13:05:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=75479 00:06:44.933 13:05:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 75479 /var/tmp/spdk2.sock 00:06:44.933 13:05:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:44.933 13:05:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 75479 ']' 00:06:44.933 13:05:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:44.933 13:05:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:44.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:44.933 13:05:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:44.933 13:05:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:44.933 13:05:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:44.933 [2024-07-15 13:05:41.462172] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:44.933 [2024-07-15 13:05:41.462292] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75479 ] 00:06:44.933 [2024-07-15 13:05:41.610617] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:44.933 [2024-07-15 13:05:41.610849] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:45.191 [2024-07-15 13:05:41.780099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:45.191 [2024-07-15 13:05:41.780378] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:45.191 [2024-07-15 13:05:41.780597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:45.758 13:05:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:45.758 13:05:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:45.758 13:05:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:45.758 13:05:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.758 13:05:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:45.758 13:05:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.758 13:05:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:45.758 13:05:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:45.758 13:05:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:45.758 13:05:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:45.758 13:05:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:45.758 13:05:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:45.758 13:05:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:45.758 13:05:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:45.758 13:05:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.758 13:05:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:45.758 [2024-07-15 13:05:42.482340] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 75449 has claimed it. 
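In locking_overlapped_coremask_via_rpc both targets start with --disable-cpumask-locks, the first (pid 75449, -m 0x7) then claims its cores over RPC, and the claim_cpu_cores error just above is the second target's attempt to do the same on the shared core 2, producing the JSON-RPC error response shown below. A hedged way to reproduce the pair of calls, assuming SPDK's scripts/rpc.py client and the socket paths from this log:

  scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks   # first target (-m 0x7) claims cores 0-2
  scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks  # second target (-m 0x1c) fails: core 2 already claimed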
00:06:45.758 2024/07/15 13:05:42 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:06:45.758 request: 00:06:45.758 { 00:06:45.758 "method": "framework_enable_cpumask_locks", 00:06:45.758 "params": {} 00:06:45.758 } 00:06:45.758 Got JSON-RPC error response 00:06:45.758 GoRPCClient: error on JSON-RPC call 00:06:45.758 13:05:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:45.758 13:05:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:45.758 13:05:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:45.758 13:05:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:45.758 13:05:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:45.758 13:05:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 75449 /var/tmp/spdk.sock 00:06:45.758 13:05:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 75449 ']' 00:06:45.758 13:05:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.758 13:05:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:45.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.758 13:05:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.758 13:05:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:46.017 13:05:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:46.276 13:05:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:46.276 13:05:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:46.276 13:05:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 75479 /var/tmp/spdk2.sock 00:06:46.276 13:05:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 75479 ']' 00:06:46.276 13:05:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:46.276 13:05:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:46.276 13:05:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:46.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:46.276 13:05:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:46.276 13:05:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:46.535 13:05:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:46.535 13:05:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:46.535 13:05:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:46.535 13:05:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:46.535 13:05:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:46.535 13:05:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:46.535 00:06:46.535 real 0m2.625s 00:06:46.535 user 0m1.300s 00:06:46.535 sys 0m0.263s 00:06:46.535 13:05:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:46.535 13:05:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:46.535 ************************************ 00:06:46.535 END TEST locking_overlapped_coremask_via_rpc 00:06:46.535 ************************************ 00:06:46.535 13:05:43 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:46.535 13:05:43 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 75449 ]] 00:06:46.535 13:05:43 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 75449 00:06:46.535 13:05:43 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 75449 ']' 00:06:46.535 13:05:43 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 75449 00:06:46.535 13:05:43 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:06:46.535 13:05:43 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:46.535 13:05:43 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75449 00:06:46.535 13:05:43 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:46.535 13:05:43 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:46.535 killing process with pid 75449 00:06:46.535 13:05:43 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75449' 00:06:46.535 13:05:43 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 75449 00:06:46.535 13:05:43 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 75449 00:06:46.793 13:05:43 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 75479 ]] 00:06:46.793 13:05:43 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 75479 00:06:46.793 13:05:43 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 75479 ']' 00:06:46.793 13:05:43 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 75479 00:06:46.793 13:05:43 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:06:46.793 13:05:43 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:46.793 
13:05:43 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75479 00:06:46.793 13:05:43 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:06:46.793 13:05:43 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:06:46.793 killing process with pid 75479 00:06:46.793 13:05:43 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75479' 00:06:46.793 13:05:43 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 75479 00:06:46.793 13:05:43 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 75479 00:06:47.358 13:05:43 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:47.358 13:05:43 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:47.358 13:05:43 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 75449 ]] 00:06:47.358 13:05:43 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 75449 00:06:47.358 13:05:43 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 75449 ']' 00:06:47.359 13:05:43 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 75449 00:06:47.359 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (75449) - No such process 00:06:47.359 Process with pid 75449 is not found 00:06:47.359 13:05:43 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 75449 is not found' 00:06:47.359 13:05:43 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 75479 ]] 00:06:47.359 13:05:43 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 75479 00:06:47.359 13:05:43 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 75479 ']' 00:06:47.359 13:05:43 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 75479 00:06:47.359 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (75479) - No such process 00:06:47.359 Process with pid 75479 is not found 00:06:47.359 13:05:43 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 75479 is not found' 00:06:47.359 13:05:43 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:47.359 00:06:47.359 real 0m20.699s 00:06:47.359 user 0m35.971s 00:06:47.359 sys 0m5.727s 00:06:47.359 13:05:43 event.cpu_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:47.359 13:05:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:47.359 ************************************ 00:06:47.359 END TEST cpu_locks 00:06:47.359 ************************************ 00:06:47.359 00:06:47.359 real 0m48.193s 00:06:47.359 user 1m33.038s 00:06:47.359 sys 0m9.654s 00:06:47.359 ************************************ 00:06:47.359 END TEST event 00:06:47.359 ************************************ 00:06:47.359 13:05:43 event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:47.359 13:05:43 event -- common/autotest_common.sh@10 -- # set +x 00:06:47.359 13:05:43 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:47.359 13:05:43 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:47.359 13:05:43 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:47.359 13:05:43 -- common/autotest_common.sh@10 -- # set +x 00:06:47.359 ************************************ 00:06:47.359 START TEST thread 00:06:47.359 ************************************ 00:06:47.359 13:05:43 thread -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:47.359 * Looking for test storage... 
00:06:47.359 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:47.359 13:05:44 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:47.359 13:05:44 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:06:47.359 13:05:44 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:47.359 13:05:44 thread -- common/autotest_common.sh@10 -- # set +x 00:06:47.359 ************************************ 00:06:47.359 START TEST thread_poller_perf 00:06:47.359 ************************************ 00:06:47.359 13:05:44 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:47.359 [2024-07-15 13:05:44.068037] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:47.359 [2024-07-15 13:05:44.068146] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75625 ] 00:06:47.618 [2024-07-15 13:05:44.204795] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.618 [2024-07-15 13:05:44.308264] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.618 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:48.992 ====================================== 00:06:48.992 busy:2209432230 (cyc) 00:06:48.992 total_run_count: 306000 00:06:48.992 tsc_hz: 2200000000 (cyc) 00:06:48.992 ====================================== 00:06:48.992 poller_cost: 7220 (cyc), 3281 (nsec) 00:06:48.992 00:06:48.992 real 0m1.341s 00:06:48.992 user 0m1.175s 00:06:48.992 sys 0m0.054s 00:06:48.992 13:05:45 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:48.992 13:05:45 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:48.992 ************************************ 00:06:48.992 END TEST thread_poller_perf 00:06:48.992 ************************************ 00:06:48.992 13:05:45 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:48.992 13:05:45 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:06:48.992 13:05:45 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:48.992 13:05:45 thread -- common/autotest_common.sh@10 -- # set +x 00:06:48.992 ************************************ 00:06:48.992 START TEST thread_poller_perf 00:06:48.992 ************************************ 00:06:48.992 13:05:45 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:48.992 [2024-07-15 13:05:45.465666] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:48.992 [2024-07-15 13:05:45.465757] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75661 ] 00:06:48.992 [2024-07-15 13:05:45.604530] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.992 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:06:48.993 [2024-07-15 13:05:45.697633] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.366 ====================================== 00:06:50.366 busy:2202727467 (cyc) 00:06:50.366 total_run_count: 3984000 00:06:50.366 tsc_hz: 2200000000 (cyc) 00:06:50.366 ====================================== 00:06:50.366 poller_cost: 552 (cyc), 250 (nsec) 00:06:50.366 ************************************ 00:06:50.366 END TEST thread_poller_perf 00:06:50.366 ************************************ 00:06:50.366 00:06:50.366 real 0m1.333s 00:06:50.367 user 0m1.156s 00:06:50.367 sys 0m0.068s 00:06:50.367 13:05:46 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:50.367 13:05:46 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:50.367 13:05:46 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:50.367 ************************************ 00:06:50.367 END TEST thread 00:06:50.367 ************************************ 00:06:50.367 00:06:50.367 real 0m2.866s 00:06:50.367 user 0m2.399s 00:06:50.367 sys 0m0.237s 00:06:50.367 13:05:46 thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:50.367 13:05:46 thread -- common/autotest_common.sh@10 -- # set +x 00:06:50.367 13:05:46 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:50.367 13:05:46 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:50.367 13:05:46 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:50.367 13:05:46 -- common/autotest_common.sh@10 -- # set +x 00:06:50.367 ************************************ 00:06:50.367 START TEST accel 00:06:50.367 ************************************ 00:06:50.367 13:05:46 accel -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:50.367 * Looking for test storage... 00:06:50.367 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:50.367 13:05:46 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:50.367 13:05:46 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:50.367 13:05:46 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:50.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:50.367 13:05:46 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=75735 00:06:50.367 13:05:46 accel -- accel/accel.sh@63 -- # waitforlisten 75735 00:06:50.367 13:05:46 accel -- common/autotest_common.sh@827 -- # '[' -z 75735 ']' 00:06:50.367 13:05:46 accel -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.367 13:05:46 accel -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:50.367 13:05:46 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:50.367 13:05:46 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:50.367 13:05:46 accel -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:50.367 13:05:46 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:50.367 13:05:46 accel -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:50.367 13:05:46 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:50.367 13:05:46 accel -- common/autotest_common.sh@10 -- # set +x 00:06:50.367 13:05:46 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.367 13:05:46 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.367 13:05:46 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:50.367 13:05:46 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:50.367 13:05:46 accel -- accel/accel.sh@41 -- # jq -r . 00:06:50.367 [2024-07-15 13:05:47.022949] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:50.367 [2024-07-15 13:05:47.023066] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75735 ] 00:06:50.625 [2024-07-15 13:05:47.155011] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.625 [2024-07-15 13:05:47.257542] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.560 13:05:48 accel -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:51.560 13:05:48 accel -- common/autotest_common.sh@860 -- # return 0 00:06:51.560 13:05:48 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:51.560 13:05:48 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:51.560 13:05:48 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:51.560 13:05:48 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:51.560 13:05:48 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:51.560 13:05:48 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:51.560 13:05:48 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.560 13:05:48 accel -- common/autotest_common.sh@10 -- # set +x 00:06:51.560 13:05:48 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:51.560 13:05:48 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:51.560 13:05:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:51.560 13:05:48 accel -- accel/accel.sh@72 -- # IFS== 00:06:51.560 13:05:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:51.560 13:05:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:51.560 13:05:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:51.560 13:05:48 accel -- accel/accel.sh@72 -- # IFS== 00:06:51.560 13:05:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:51.560 13:05:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:51.560 13:05:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:51.560 13:05:48 accel -- accel/accel.sh@72 -- # IFS== 00:06:51.560 13:05:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:51.560 13:05:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:51.560 13:05:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:51.560 13:05:48 accel -- accel/accel.sh@72 -- # IFS== 00:06:51.560 13:05:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:51.560 13:05:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:51.560 13:05:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:51.560 13:05:48 accel -- accel/accel.sh@72 -- # IFS== 00:06:51.560 13:05:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:51.560 13:05:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:51.560 13:05:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:51.560 13:05:48 accel -- accel/accel.sh@72 -- # IFS== 00:06:51.560 13:05:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:51.560 13:05:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:51.560 13:05:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:51.560 13:05:48 accel -- accel/accel.sh@72 -- # IFS== 00:06:51.560 13:05:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:51.560 13:05:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:51.560 13:05:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:51.560 13:05:48 accel -- accel/accel.sh@72 -- # IFS== 00:06:51.560 13:05:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:51.560 13:05:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:51.560 13:05:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:51.560 13:05:48 accel -- accel/accel.sh@72 -- # IFS== 00:06:51.560 13:05:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:51.560 13:05:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:51.560 13:05:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:51.560 13:05:48 accel -- accel/accel.sh@72 -- # IFS== 00:06:51.560 13:05:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:51.560 13:05:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:51.560 13:05:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:51.560 13:05:48 accel -- accel/accel.sh@72 -- # IFS== 00:06:51.560 13:05:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:51.560 13:05:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:51.560 13:05:48 accel -- accel/accel.sh@71 -- # for opc_opt in 
"${exp_opcs[@]}" 00:06:51.560 13:05:48 accel -- accel/accel.sh@72 -- # IFS== 00:06:51.560 13:05:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:51.560 13:05:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:51.560 13:05:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:51.560 13:05:48 accel -- accel/accel.sh@72 -- # IFS== 00:06:51.560 13:05:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:51.560 13:05:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:51.560 13:05:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:51.560 13:05:48 accel -- accel/accel.sh@72 -- # IFS== 00:06:51.560 13:05:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:51.560 13:05:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:51.560 13:05:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:51.560 13:05:48 accel -- accel/accel.sh@72 -- # IFS== 00:06:51.560 13:05:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:51.560 13:05:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:51.560 13:05:48 accel -- accel/accel.sh@75 -- # killprocess 75735 00:06:51.560 13:05:48 accel -- common/autotest_common.sh@946 -- # '[' -z 75735 ']' 00:06:51.560 13:05:48 accel -- common/autotest_common.sh@950 -- # kill -0 75735 00:06:51.560 13:05:48 accel -- common/autotest_common.sh@951 -- # uname 00:06:51.560 13:05:48 accel -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:51.560 13:05:48 accel -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75735 00:06:51.560 13:05:48 accel -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:51.560 13:05:48 accel -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:51.560 13:05:48 accel -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75735' 00:06:51.560 killing process with pid 75735 00:06:51.560 13:05:48 accel -- common/autotest_common.sh@965 -- # kill 75735 00:06:51.560 13:05:48 accel -- common/autotest_common.sh@970 -- # wait 75735 00:06:51.819 13:05:48 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:51.819 13:05:48 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:51.819 13:05:48 accel -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:51.819 13:05:48 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:51.819 13:05:48 accel -- common/autotest_common.sh@10 -- # set +x 00:06:51.819 13:05:48 accel.accel_help -- common/autotest_common.sh@1121 -- # accel_perf -h 00:06:51.819 13:05:48 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:51.819 13:05:48 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:51.819 13:05:48 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:51.819 13:05:48 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:51.819 13:05:48 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.819 13:05:48 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.819 13:05:48 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:51.819 13:05:48 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:51.819 13:05:48 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:06:52.078 13:05:48 accel.accel_help -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:52.078 13:05:48 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:52.078 13:05:48 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:52.078 13:05:48 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:52.078 13:05:48 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:52.078 13:05:48 accel -- common/autotest_common.sh@10 -- # set +x 00:06:52.078 ************************************ 00:06:52.078 START TEST accel_missing_filename 00:06:52.078 ************************************ 00:06:52.078 13:05:48 accel.accel_missing_filename -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress 00:06:52.078 13:05:48 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:52.078 13:05:48 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:52.078 13:05:48 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:52.078 13:05:48 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:52.078 13:05:48 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:52.078 13:05:48 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:52.078 13:05:48 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:52.078 13:05:48 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:52.078 13:05:48 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:52.078 13:05:48 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:52.078 13:05:48 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:52.078 13:05:48 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.078 13:05:48 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.078 13:05:48 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:52.078 13:05:48 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:52.078 13:05:48 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:52.078 [2024-07-15 13:05:48.631017] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:52.078 [2024-07-15 13:05:48.631103] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75805 ] 00:06:52.078 [2024-07-15 13:05:48.773967] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.337 [2024-07-15 13:05:48.864464] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.337 [2024-07-15 13:05:48.925368] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:52.337 [2024-07-15 13:05:49.006937] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:52.596 A filename is required. 
00:06:52.596 13:05:49 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:52.596 13:05:49 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:52.596 13:05:49 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:52.596 ************************************ 00:06:52.596 END TEST accel_missing_filename 00:06:52.596 ************************************ 00:06:52.596 13:05:49 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:52.596 13:05:49 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:52.596 13:05:49 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:52.596 00:06:52.596 real 0m0.479s 00:06:52.596 user 0m0.284s 00:06:52.596 sys 0m0.136s 00:06:52.596 13:05:49 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:52.596 13:05:49 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:52.596 13:05:49 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:52.596 13:05:49 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:06:52.596 13:05:49 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:52.596 13:05:49 accel -- common/autotest_common.sh@10 -- # set +x 00:06:52.596 ************************************ 00:06:52.596 START TEST accel_compress_verify 00:06:52.596 ************************************ 00:06:52.596 13:05:49 accel.accel_compress_verify -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:52.596 13:05:49 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:52.596 13:05:49 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:52.596 13:05:49 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:52.596 13:05:49 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:52.596 13:05:49 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:52.596 13:05:49 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:52.596 13:05:49 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:52.596 13:05:49 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:52.596 13:05:49 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:52.596 13:05:49 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:52.596 13:05:49 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:52.596 13:05:49 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.596 13:05:49 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.596 13:05:49 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:52.596 13:05:49 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:52.596 13:05:49 accel.accel_compress_verify -- 
accel/accel.sh@41 -- # jq -r . 00:06:52.596 [2024-07-15 13:05:49.162772] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:52.596 [2024-07-15 13:05:49.162864] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75829 ] 00:06:52.596 [2024-07-15 13:05:49.301274] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.856 [2024-07-15 13:05:49.396139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.856 [2024-07-15 13:05:49.456161] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:52.856 [2024-07-15 13:05:49.535942] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:53.115 00:06:53.115 Compression does not support the verify option, aborting. 00:06:53.115 ************************************ 00:06:53.115 END TEST accel_compress_verify 00:06:53.115 ************************************ 00:06:53.115 13:05:49 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:53.115 13:05:49 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:53.115 13:05:49 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:53.115 13:05:49 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:53.115 13:05:49 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:53.115 13:05:49 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:53.115 00:06:53.115 real 0m0.501s 00:06:53.115 user 0m0.315s 00:06:53.115 sys 0m0.126s 00:06:53.115 13:05:49 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:53.115 13:05:49 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:53.115 13:05:49 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:53.115 13:05:49 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:53.115 13:05:49 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:53.115 13:05:49 accel -- common/autotest_common.sh@10 -- # set +x 00:06:53.115 ************************************ 00:06:53.115 START TEST accel_wrong_workload 00:06:53.115 ************************************ 00:06:53.115 13:05:49 accel.accel_wrong_workload -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w foobar 00:06:53.115 13:05:49 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:53.115 13:05:49 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:53.115 13:05:49 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:53.115 13:05:49 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:53.115 13:05:49 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:53.115 13:05:49 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:53.115 13:05:49 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:06:53.115 13:05:49 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:53.115 13:05:49 
accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:53.115 13:05:49 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:53.115 13:05:49 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:53.115 13:05:49 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.115 13:05:49 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.115 13:05:49 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:53.115 13:05:49 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:53.115 13:05:49 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:53.115 Unsupported workload type: foobar 00:06:53.115 [2024-07-15 13:05:49.715015] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:53.115 accel_perf options: 00:06:53.115 [-h help message] 00:06:53.115 [-q queue depth per core] 00:06:53.115 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:53.115 [-T number of threads per core 00:06:53.115 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:53.115 [-t time in seconds] 00:06:53.115 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:53.115 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:53.115 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:53.115 [-l for compress/decompress workloads, name of uncompressed input file 00:06:53.115 [-S for crc32c workload, use this seed value (default 0) 00:06:53.115 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:53.115 [-f for fill workload, use this BYTE value (default 255) 00:06:53.115 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:53.115 [-y verify result if this switch is on] 00:06:53.115 [-a tasks to allocate per core (default: same value as -q)] 00:06:53.115 Can be used to spread operations across a wider range of memory. 
00:06:53.115 13:05:49 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:53.115 13:05:49 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:53.115 13:05:49 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:53.115 13:05:49 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:53.115 00:06:53.115 real 0m0.030s 00:06:53.115 user 0m0.014s 00:06:53.115 sys 0m0.014s 00:06:53.115 ************************************ 00:06:53.115 END TEST accel_wrong_workload 00:06:53.115 ************************************ 00:06:53.115 13:05:49 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:53.115 13:05:49 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:53.115 13:05:49 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:53.115 13:05:49 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:06:53.115 13:05:49 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:53.115 13:05:49 accel -- common/autotest_common.sh@10 -- # set +x 00:06:53.115 ************************************ 00:06:53.115 START TEST accel_negative_buffers 00:06:53.115 ************************************ 00:06:53.115 13:05:49 accel.accel_negative_buffers -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:53.115 13:05:49 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:53.115 13:05:49 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:53.115 13:05:49 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:53.115 13:05:49 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:53.115 13:05:49 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:53.115 13:05:49 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:53.115 13:05:49 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:53.115 13:05:49 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:53.115 13:05:49 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:53.115 13:05:49 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:53.115 13:05:49 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:53.115 13:05:49 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.115 13:05:49 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.115 13:05:49 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:53.115 13:05:49 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:53.115 13:05:49 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:53.115 -x option must be non-negative. 
00:06:53.115 [2024-07-15 13:05:49.791852] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:53.115 accel_perf options: 00:06:53.115 [-h help message] 00:06:53.115 [-q queue depth per core] 00:06:53.115 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:53.115 [-T number of threads per core 00:06:53.115 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:53.115 [-t time in seconds] 00:06:53.115 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:53.115 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:53.115 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:53.115 [-l for compress/decompress workloads, name of uncompressed input file 00:06:53.115 [-S for crc32c workload, use this seed value (default 0) 00:06:53.115 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:53.115 [-f for fill workload, use this BYTE value (default 255) 00:06:53.115 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:53.115 [-y verify result if this switch is on] 00:06:53.115 [-a tasks to allocate per core (default: same value as -q)] 00:06:53.115 Can be used to spread operations across a wider range of memory. 00:06:53.115 13:05:49 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:53.115 13:05:49 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:53.115 13:05:49 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:53.115 ************************************ 00:06:53.115 END TEST accel_negative_buffers 00:06:53.115 ************************************ 00:06:53.115 13:05:49 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:53.115 00:06:53.115 real 0m0.030s 00:06:53.115 user 0m0.017s 00:06:53.115 sys 0m0.012s 00:06:53.115 13:05:49 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:53.115 13:05:49 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:53.115 13:05:49 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:53.115 13:05:49 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:53.115 13:05:49 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:53.115 13:05:49 accel -- common/autotest_common.sh@10 -- # set +x 00:06:53.116 ************************************ 00:06:53.116 START TEST accel_crc32c 00:06:53.116 ************************************ 00:06:53.116 13:05:49 accel.accel_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:53.116 13:05:49 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:53.116 13:05:49 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:53.116 13:05:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:53.116 13:05:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:53.116 13:05:49 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:53.116 13:05:49 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:53.116 13:05:49 accel.accel_crc32c -- 
accel/accel.sh@12 -- # build_accel_config 00:06:53.116 13:05:49 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:53.116 13:05:49 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:53.116 13:05:49 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.116 13:05:49 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.116 13:05:49 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:53.116 13:05:49 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:53.116 13:05:49 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:53.374 [2024-07-15 13:05:49.870046] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:53.374 [2024-07-15 13:05:49.870134] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75892 ] 00:06:53.374 [2024-07-15 13:05:50.006661] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.374 [2024-07-15 13:05:50.084498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.633 13:05:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:53.633 13:05:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:53.633 13:05:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:53.633 13:05:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:53.633 13:05:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:53.633 13:05:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:53.633 13:05:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:53.633 13:05:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:53.633 13:05:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:53.633 13:05:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:53.633 13:05:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:53.633 13:05:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:53.633 13:05:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:53.633 13:05:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:53.633 13:05:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:53.633 13:05:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:53.633 13:05:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:53.633 13:05:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:53.633 13:05:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:53.633 13:05:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:53.633 13:05:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:53.633 13:05:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:53.633 13:05:50 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:53.633 13:05:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:53.633 13:05:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:53.633 13:05:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:53.633 13:05:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:53.633 13:05:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:53.633 13:05:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:53.633 13:05:50 accel.accel_crc32c -- accel/accel.sh@20 -- 
# val='4096 bytes' 00:06:53.633 13:05:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:53.633 13:05:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:53.633 13:05:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:53.633 13:05:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:53.633 13:05:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:53.633 13:05:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:53.633 13:05:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:53.633 13:05:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:53.633 13:05:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:53.633 13:05:50 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:53.633 13:05:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:53.633 13:05:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:53.633 13:05:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:53.633 13:05:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:53.633 13:05:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:53.633 13:05:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:53.633 13:05:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:53.633 13:05:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:53.633 13:05:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:53.633 13:05:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:53.633 13:05:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:53.633 13:05:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:53.633 13:05:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:53.633 13:05:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:53.633 13:05:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:53.633 13:05:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:53.633 13:05:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:53.633 13:05:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:53.633 13:05:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:53.633 13:05:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:53.633 13:05:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:53.633 13:05:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:53.633 13:05:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:53.633 13:05:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:53.633 13:05:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:53.633 13:05:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:53.633 13:05:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:53.633 13:05:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:53.633 13:05:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:53.634 13:05:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:55.009 13:05:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:55.009 13:05:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:55.009 13:05:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:55.009 13:05:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:55.009 13:05:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 
00:06:55.009 13:05:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:55.009 13:05:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:55.009 13:05:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:55.009 13:05:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:55.009 13:05:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:55.009 13:05:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:55.010 13:05:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:55.010 13:05:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:55.010 13:05:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:55.010 13:05:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:55.010 13:05:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:55.010 13:05:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:55.010 13:05:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:55.010 13:05:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:55.010 13:05:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:55.010 13:05:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:55.010 13:05:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:55.010 13:05:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:55.010 13:05:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:55.010 ************************************ 00:06:55.010 END TEST accel_crc32c 00:06:55.010 ************************************ 00:06:55.010 13:05:51 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:55.010 13:05:51 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:55.010 13:05:51 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:55.010 00:06:55.010 real 0m1.475s 00:06:55.010 user 0m1.245s 00:06:55.010 sys 0m0.128s 00:06:55.010 13:05:51 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:55.010 13:05:51 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:55.010 13:05:51 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:55.010 13:05:51 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:55.010 13:05:51 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:55.010 13:05:51 accel -- common/autotest_common.sh@10 -- # set +x 00:06:55.010 ************************************ 00:06:55.010 START TEST accel_crc32c_C2 00:06:55.010 ************************************ 00:06:55.010 13:05:51 accel.accel_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:55.010 13:05:51 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:55.010 13:05:51 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:55.010 13:05:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.010 13:05:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.010 13:05:51 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:55.010 13:05:51 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:55.010 13:05:51 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:55.010 13:05:51 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 
00:06:55.010 13:05:51 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:55.010 13:05:51 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.010 13:05:51 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.010 13:05:51 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:55.010 13:05:51 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:55.010 13:05:51 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:55.010 [2024-07-15 13:05:51.389476] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:55.010 [2024-07-15 13:05:51.389552] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75922 ] 00:06:55.010 [2024-07-15 13:05:51.519524] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.010 [2024-07-15 13:05:51.612908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.010 13:05:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:55.010 13:05:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.010 13:05:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.010 13:05:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.010 13:05:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:55.010 13:05:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.010 13:05:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.010 13:05:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.010 13:05:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:55.010 13:05:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.010 13:05:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.010 13:05:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.010 13:05:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:55.010 13:05:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.010 13:05:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.010 13:05:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.010 13:05:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:55.010 13:05:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.010 13:05:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.010 13:05:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.010 13:05:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:55.010 13:05:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.010 13:05:51 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:55.010 13:05:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.010 13:05:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.010 13:05:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:55.010 13:05:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.010 13:05:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.010 13:05:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.010 13:05:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 
00:06:55.010 13:05:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.010 13:05:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.010 13:05:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.010 13:05:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:55.010 13:05:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.010 13:05:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.010 13:05:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.010 13:05:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:55.010 13:05:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.010 13:05:51 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:55.010 13:05:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.010 13:05:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.010 13:05:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:55.010 13:05:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.010 13:05:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.010 13:05:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.010 13:05:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:55.010 13:05:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.010 13:05:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.010 13:05:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.010 13:05:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:55.010 13:05:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.010 13:05:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.010 13:05:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.010 13:05:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:55.010 13:05:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.010 13:05:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.010 13:05:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.010 13:05:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:55.010 13:05:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.010 13:05:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.010 13:05:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.010 13:05:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:55.010 13:05:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.010 13:05:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.010 13:05:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.010 13:05:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:55.010 13:05:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.010 13:05:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.010 13:05:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.386 13:05:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:56.386 13:05:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.386 13:05:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:56.386 13:05:52 accel.accel_crc32c_C2 -- 
accel/accel.sh@19 -- # read -r var val 00:06:56.386 13:05:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:56.386 13:05:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.386 13:05:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:56.386 13:05:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.386 13:05:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:56.386 13:05:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.386 13:05:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:56.386 13:05:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.386 13:05:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:56.386 13:05:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.386 13:05:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:56.386 13:05:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.386 13:05:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:56.386 13:05:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.386 13:05:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:56.386 13:05:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.386 13:05:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:56.386 ************************************ 00:06:56.386 END TEST accel_crc32c_C2 00:06:56.386 ************************************ 00:06:56.386 13:05:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.386 13:05:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:56.386 13:05:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.386 13:05:52 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:56.386 13:05:52 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:56.386 13:05:52 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:56.386 00:06:56.386 real 0m1.482s 00:06:56.386 user 0m1.259s 00:06:56.386 sys 0m0.124s 00:06:56.386 13:05:52 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:56.386 13:05:52 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:56.386 13:05:52 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:56.386 13:05:52 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:56.386 13:05:52 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:56.386 13:05:52 accel -- common/autotest_common.sh@10 -- # set +x 00:06:56.386 ************************************ 00:06:56.386 START TEST accel_copy 00:06:56.386 ************************************ 00:06:56.386 13:05:52 accel.accel_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy -y 00:06:56.386 13:05:52 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:56.386 13:05:52 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:06:56.387 13:05:52 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:56.387 13:05:52 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:56.387 13:05:52 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:56.387 13:05:52 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:56.387 13:05:52 accel.accel_copy -- accel/accel.sh@12 -- # 
build_accel_config 00:06:56.387 13:05:52 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:56.387 13:05:52 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:56.387 13:05:52 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.387 13:05:52 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.387 13:05:52 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:56.387 13:05:52 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:56.387 13:05:52 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:56.387 [2024-07-15 13:05:52.918949] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:56.387 [2024-07-15 13:05:52.919038] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75957 ] 00:06:56.387 [2024-07-15 13:05:53.055746] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.654 [2024-07-15 13:05:53.158241] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.654 13:05:53 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:56.654 13:05:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:56.654 13:05:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:56.654 13:05:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:56.654 13:05:53 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:56.654 13:05:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:56.654 13:05:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:56.654 13:05:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:56.654 13:05:53 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:56.654 13:05:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:56.654 13:05:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:56.654 13:05:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:56.654 13:05:53 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:56.654 13:05:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:56.654 13:05:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:56.654 13:05:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:56.654 13:05:53 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:56.654 13:05:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:56.654 13:05:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:56.654 13:05:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:56.654 13:05:53 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:56.654 13:05:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:56.654 13:05:53 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:56.654 13:05:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:56.654 13:05:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:56.654 13:05:53 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:56.654 13:05:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:56.654 13:05:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:56.654 13:05:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:56.654 13:05:53 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:56.654 13:05:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 
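Note: the dense runs of IFS=:, read -r var val, and case "$var" in above are bash xtrace output from a parsing loop in accel.sh: it walks a colon-separated stream of name/value pairs (the val=software, val=32, val='1 seconds' entries are the values it picks up) and dispatches on each one. A minimal sketch of the shell pattern itself, with hypothetical names (module, opcode, dumped_config are illustrative, not taken from accel.sh):

    # Sketch of the name:value read-back loop seen in the trace above.
    while IFS=: read -r var val; do
        case "$var" in
            module) accel_module=$val ;;   # e.g. "software"
            opcode) accel_opc=$val ;;      # e.g. "copy", "fill", "crc32c"
            *) ;;                          # everything else ignored in this sketch
        esac
    done < "$dumped_config"                # hypothetical fd/file carrying the dumped settings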
00:06:56.654 13:05:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:56.654 13:05:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:56.654 13:05:53 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:56.655 13:05:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:56.655 13:05:53 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:56.655 13:05:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:56.655 13:05:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:56.655 13:05:53 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:56.655 13:05:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:56.655 13:05:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:56.655 13:05:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:56.655 13:05:53 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:56.655 13:05:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:56.655 13:05:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:56.655 13:05:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:56.655 13:05:53 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:56.655 13:05:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:56.655 13:05:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:56.655 13:05:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:56.655 13:05:53 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:56.655 13:05:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:56.655 13:05:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:56.655 13:05:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:56.655 13:05:53 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:56.655 13:05:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:56.655 13:05:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:56.655 13:05:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:56.655 13:05:53 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:56.655 13:05:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:56.655 13:05:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:56.655 13:05:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:56.655 13:05:53 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:56.655 13:05:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:56.655 13:05:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:56.655 13:05:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:58.034 13:05:54 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:58.034 13:05:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:58.034 13:05:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:58.034 13:05:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:58.034 13:05:54 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:58.034 13:05:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:58.034 13:05:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:58.034 13:05:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:58.034 13:05:54 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:58.034 13:05:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:58.034 13:05:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:58.034 13:05:54 accel.accel_copy -- 
accel/accel.sh@19 -- # read -r var val 00:06:58.034 13:05:54 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:58.034 13:05:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:58.034 13:05:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:58.034 13:05:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:58.034 13:05:54 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:58.034 13:05:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:58.034 13:05:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:58.034 13:05:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:58.034 13:05:54 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:58.034 13:05:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:58.034 13:05:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:58.034 13:05:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:58.034 13:05:54 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:58.034 13:05:54 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:58.034 13:05:54 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:58.034 00:06:58.034 real 0m1.495s 00:06:58.034 user 0m1.272s 00:06:58.034 sys 0m0.122s 00:06:58.034 13:05:54 accel.accel_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:58.034 13:05:54 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:58.034 ************************************ 00:06:58.034 END TEST accel_copy 00:06:58.034 ************************************ 00:06:58.034 13:05:54 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:58.034 13:05:54 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:06:58.034 13:05:54 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:58.034 13:05:54 accel -- common/autotest_common.sh@10 -- # set +x 00:06:58.034 ************************************ 00:06:58.034 START TEST accel_fill 00:06:58.034 ************************************ 00:06:58.034 13:05:54 accel.accel_fill -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:58.034 13:05:54 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:58.034 13:05:54 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:58.034 13:05:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:58.034 13:05:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:58.034 13:05:54 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:58.034 13:05:54 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:58.034 13:05:54 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:06:58.034 13:05:54 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:58.034 13:05:54 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:58.034 13:05:54 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.034 13:05:54 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.034 13:05:54 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:58.034 13:05:54 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:58.034 13:05:54 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 
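For orientation, the accel_copy case above is driven by the accel_perf command echoed at accel/accel.sh@12 in the trace. Reproduced from the log (the -c /dev/fd/62 argument only works because the harness supplies that descriptor; flag meanings below are limited to what the trace itself confirms):

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y
    # -t 1    : run time, matching the '1 seconds' value read back in the trace
    # -w copy : workload name, matching accel_opc=copy in the trace
    # -c /dev/fd/62 and -y are passed through exactly as logged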
00:06:58.034 [2024-07-15 13:05:54.459329] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:58.034 [2024-07-15 13:05:54.459414] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75991 ] 00:06:58.034 [2024-07-15 13:05:54.596782] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.034 [2024-07-15 13:05:54.703659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.034 13:05:54 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:58.034 13:05:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:58.034 13:05:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:58.034 13:05:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:58.034 13:05:54 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:58.034 13:05:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:58.034 13:05:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:58.034 13:05:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:58.034 13:05:54 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:58.034 13:05:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:58.034 13:05:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:58.034 13:05:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:58.034 13:05:54 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:58.034 13:05:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:58.034 13:05:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:58.034 13:05:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:58.034 13:05:54 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:58.034 13:05:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:58.034 13:05:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:58.034 13:05:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:58.034 13:05:54 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:58.034 13:05:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:58.034 13:05:54 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:58.034 13:05:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:58.034 13:05:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:58.293 13:05:54 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:58.293 13:05:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:58.293 13:05:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:58.293 13:05:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:58.293 13:05:54 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:58.293 13:05:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:58.293 13:05:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:58.293 13:05:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:58.293 13:05:54 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:58.293 13:05:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:58.293 13:05:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:58.293 13:05:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:58.293 13:05:54 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:58.293 13:05:54 
accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:58.293 13:05:54 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:58.293 13:05:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:58.293 13:05:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:58.293 13:05:54 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:58.293 13:05:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:58.293 13:05:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:58.293 13:05:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:58.293 13:05:54 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:58.293 13:05:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:58.293 13:05:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:58.293 13:05:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:58.293 13:05:54 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:58.293 13:05:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:58.293 13:05:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:58.293 13:05:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:58.293 13:05:54 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:58.293 13:05:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:58.293 13:05:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:58.293 13:05:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:58.293 13:05:54 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:58.293 13:05:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:58.293 13:05:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:58.293 13:05:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:58.293 13:05:54 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:58.293 13:05:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:58.293 13:05:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:58.293 13:05:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:58.293 13:05:54 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:58.293 13:05:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:58.293 13:05:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:58.293 13:05:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:59.228 13:05:55 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:59.228 13:05:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:59.228 13:05:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:59.228 13:05:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:59.228 13:05:55 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:59.228 13:05:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:59.228 13:05:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:59.228 13:05:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:59.228 13:05:55 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:59.228 13:05:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:59.228 13:05:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:59.228 13:05:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:59.228 13:05:55 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:59.228 13:05:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:59.228 13:05:55 accel.accel_fill -- accel/accel.sh@19 -- # 
IFS=: 00:06:59.228 13:05:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:59.228 13:05:55 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:59.228 ************************************ 00:06:59.228 END TEST accel_fill 00:06:59.228 ************************************ 00:06:59.228 13:05:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:59.228 13:05:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:59.228 13:05:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:59.228 13:05:55 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:59.228 13:05:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:59.228 13:05:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:59.228 13:05:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:59.229 13:05:55 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:59.229 13:05:55 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:59.229 13:05:55 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:59.229 00:06:59.229 real 0m1.495s 00:06:59.229 user 0m1.279s 00:06:59.229 sys 0m0.116s 00:06:59.229 13:05:55 accel.accel_fill -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:59.229 13:05:55 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:59.487 13:05:55 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:59.487 13:05:55 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:59.487 13:05:55 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:59.487 13:05:55 accel -- common/autotest_common.sh@10 -- # set +x 00:06:59.487 ************************************ 00:06:59.487 START TEST accel_copy_crc32c 00:06:59.487 ************************************ 00:06:59.487 13:05:55 accel.accel_copy_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y 00:06:59.487 13:05:55 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:59.487 13:05:55 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:59.487 13:05:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.487 13:05:55 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:59.487 13:05:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.487 13:05:55 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:59.487 13:05:55 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:59.487 13:05:55 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:59.487 13:05:55 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:59.487 13:05:55 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.487 13:05:55 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.487 13:05:55 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:59.487 13:05:55 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:59.487 13:05:55 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:59.487 [2024-07-15 13:05:56.010936] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
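The START TEST / END TEST banners and the real/user/sys summaries that bracket each case (accel_fill just above, accel_copy_crc32c starting here) come from the run_test helper in common/autotest_common.sh, which times the test it is handed. A rough sketch of what such a wrapper does, simplified and not the actual SPDK implementation:

    run_test() {                       # simplified illustration only
        local name=$1; shift
        echo "************ START TEST $name ************"
        time "$@"                      # emits the real/user/sys lines seen in this log
        local rc=$?
        echo "************ END TEST $name ************"
        return "$rc"
    }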
00:06:59.487 [2024-07-15 13:05:56.011038] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76026 ] 00:06:59.487 [2024-07-15 13:05:56.149082] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.746 [2024-07-15 13:05:56.263337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.746 13:05:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:59.746 13:05:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.746 13:05:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.746 13:05:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.746 13:05:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:59.746 13:05:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.746 13:05:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.746 13:05:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.746 13:05:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:59.746 13:05:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.746 13:05:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.746 13:05:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.746 13:05:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:59.746 13:05:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.746 13:05:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.746 13:05:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.746 13:05:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:59.746 13:05:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.746 13:05:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.746 13:05:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.746 13:05:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:59.746 13:05:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.746 13:05:56 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:59.746 13:05:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.746 13:05:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.746 13:05:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:59.746 13:05:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.746 13:05:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.746 13:05:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.746 13:05:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:59.746 13:05:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.746 13:05:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.746 13:05:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.746 13:05:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:59.746 13:05:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.746 13:05:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.746 13:05:56 
accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.746 13:05:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:59.746 13:05:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.746 13:05:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.746 13:05:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.747 13:05:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:59.747 13:05:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.747 13:05:56 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:59.747 13:05:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.747 13:05:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.747 13:05:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:59.747 13:05:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.747 13:05:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.747 13:05:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.747 13:05:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:59.747 13:05:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.747 13:05:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.747 13:05:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.747 13:05:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:59.747 13:05:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.747 13:05:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.747 13:05:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.747 13:05:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:59.747 13:05:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.747 13:05:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.747 13:05:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.747 13:05:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:59.747 13:05:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.747 13:05:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.747 13:05:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.747 13:05:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:59.747 13:05:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.747 13:05:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.747 13:05:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.747 13:05:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:59.747 13:05:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.747 13:05:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.747 13:05:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:01.121 13:05:57 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:01.121 13:05:57 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:01.121 13:05:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:01.121 13:05:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:01.121 13:05:57 accel.accel_copy_crc32c -- accel/accel.sh@20 
-- # val= 00:07:01.121 13:05:57 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:01.121 13:05:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:01.121 13:05:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:01.121 13:05:57 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:01.121 13:05:57 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:01.122 13:05:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:01.122 13:05:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:01.122 13:05:57 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:01.122 13:05:57 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:01.122 13:05:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:01.122 13:05:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:01.122 13:05:57 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:01.122 13:05:57 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:01.122 13:05:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:01.122 13:05:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:01.122 13:05:57 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:01.122 13:05:57 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:01.122 13:05:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:01.122 13:05:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:01.122 13:05:57 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:01.122 13:05:57 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:01.122 13:05:57 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:01.122 00:07:01.122 real 0m1.515s 00:07:01.122 user 0m1.278s 00:07:01.122 sys 0m0.136s 00:07:01.122 13:05:57 accel.accel_copy_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:01.122 13:05:57 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:01.122 ************************************ 00:07:01.122 END TEST accel_copy_crc32c 00:07:01.122 ************************************ 00:07:01.122 13:05:57 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:01.122 13:05:57 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:01.122 13:05:57 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:01.122 13:05:57 accel -- common/autotest_common.sh@10 -- # set +x 00:07:01.122 ************************************ 00:07:01.122 START TEST accel_copy_crc32c_C2 00:07:01.122 ************************************ 00:07:01.122 13:05:57 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:01.122 13:05:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:01.122 13:05:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:01.122 13:05:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.122 13:05:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.122 13:05:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:01.122 13:05:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w 
copy_crc32c -y -C 2 00:07:01.122 13:05:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:01.122 13:05:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:01.122 13:05:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:01.122 13:05:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.122 13:05:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.122 13:05:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:01.122 13:05:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:01.122 13:05:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:01.122 [2024-07-15 13:05:57.572029] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:01.122 [2024-07-15 13:05:57.572155] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76065 ] 00:07:01.122 [2024-07-15 13:05:57.711579] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.122 [2024-07-15 13:05:57.804994] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.380 13:05:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:01.380 13:05:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.380 13:05:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.380 13:05:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.380 13:05:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:01.380 13:05:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.380 13:05:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.380 13:05:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.380 13:05:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:01.380 13:05:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.380 13:05:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.380 13:05:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.380 13:05:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:01.380 13:05:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.380 13:05:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.380 13:05:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.380 13:05:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:01.380 13:05:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.380 13:05:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.380 13:05:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.380 13:05:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:01.380 13:05:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.380 13:05:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:01.380 13:05:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.380 13:05:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.380 13:05:57 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:01.380 13:05:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.380 13:05:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.380 13:05:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.380 13:05:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:01.380 13:05:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.380 13:05:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.380 13:05:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.380 13:05:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:01.380 13:05:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.380 13:05:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.380 13:05:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.380 13:05:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:01.380 13:05:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.380 13:05:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.380 13:05:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.380 13:05:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:01.380 13:05:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.380 13:05:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:01.380 13:05:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.380 13:05:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.380 13:05:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:01.380 13:05:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.380 13:05:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.380 13:05:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.380 13:05:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:01.380 13:05:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.380 13:05:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.380 13:05:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.380 13:05:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:01.380 13:05:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.380 13:05:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.380 13:05:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.380 13:05:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:01.380 13:05:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.380 13:05:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.380 13:05:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.380 13:05:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:01.380 13:05:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.380 13:05:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.380 13:05:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
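Each case in this section finishes with the same assertions seen earlier in the trace: [[ -n software ]], [[ -n <opcode> ]], and [[ software == \s\o\f\t\w\a\r\e ]]. The backslashes are simply how bash xtrace prints a literal (pattern-free) string match. Written plainly, with illustrative variable names, the check amounts to:

    # Equivalent of the end-of-test assertions in the trace.
    if [[ -n "$accel_module" && -n "$accel_opc" && "$accel_module" == "software" ]]; then
        echo "workload $accel_opc ran on the software accel module, as expected"
    fi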
00:07:01.380 13:05:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:01.380 13:05:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.380 13:05:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.380 13:05:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.380 13:05:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:01.380 13:05:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.380 13:05:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.380 13:05:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.319 13:05:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:02.319 13:05:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.319 13:05:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.319 13:05:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.319 13:05:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:02.319 13:05:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.319 13:05:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.319 13:05:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.319 13:05:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:02.319 13:05:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.319 13:05:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.319 13:05:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.319 13:05:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:02.319 13:05:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.319 13:05:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.319 13:05:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.319 13:05:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:02.319 13:05:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.319 13:05:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.319 13:05:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.319 13:05:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:02.319 13:05:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.319 13:05:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.319 13:05:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.319 13:05:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:02.319 13:05:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:02.319 13:05:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:02.319 00:07:02.319 real 0m1.478s 00:07:02.319 user 0m1.266s 00:07:02.319 sys 0m0.121s 00:07:02.319 13:05:59 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:02.319 13:05:59 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:02.319 ************************************ 00:07:02.319 END TEST accel_copy_crc32c_C2 00:07:02.319 ************************************ 00:07:02.576 13:05:59 accel -- accel/accel.sh@107 -- # run_test accel_dualcast 
accel_test -t 1 -w dualcast -y 00:07:02.576 13:05:59 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:02.576 13:05:59 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:02.576 13:05:59 accel -- common/autotest_common.sh@10 -- # set +x 00:07:02.576 ************************************ 00:07:02.576 START TEST accel_dualcast 00:07:02.576 ************************************ 00:07:02.576 13:05:59 accel.accel_dualcast -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dualcast -y 00:07:02.576 13:05:59 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:07:02.576 13:05:59 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:07:02.576 13:05:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:02.576 13:05:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:02.576 13:05:59 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:02.576 13:05:59 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:02.576 13:05:59 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:07:02.576 13:05:59 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:02.576 13:05:59 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:02.576 13:05:59 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.576 13:05:59 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.576 13:05:59 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:02.576 13:05:59 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:07:02.576 13:05:59 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:07:02.576 [2024-07-15 13:05:59.101188] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
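Before the dualcast case gets going, note that the two copy_crc32c runs that just completed differ only in the trailing -C argument, as their run_test invocations in the trace show; the _C2 run also reads back two buffer sizes ('4096 bytes' and '8192 bytes') instead of one:

    # Both lines are copied from the run_test calls echoed earlier in this log.
    run_test accel_copy_crc32c    accel_test -t 1 -w copy_crc32c -y
    run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2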
00:07:02.576 [2024-07-15 13:05:59.101292] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76095 ] 00:07:02.576 [2024-07-15 13:05:59.239587] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.834 [2024-07-15 13:05:59.327452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.834 13:05:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:02.834 13:05:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:02.834 13:05:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:02.834 13:05:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:02.834 13:05:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:02.834 13:05:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:02.834 13:05:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:02.834 13:05:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:02.834 13:05:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:07:02.834 13:05:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:02.834 13:05:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:02.834 13:05:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:02.834 13:05:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:02.834 13:05:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:02.834 13:05:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:02.834 13:05:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:02.834 13:05:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:02.834 13:05:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:02.834 13:05:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:02.834 13:05:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:02.834 13:05:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:07:02.834 13:05:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:02.834 13:05:59 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:07:02.834 13:05:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:02.834 13:05:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:02.834 13:05:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:02.834 13:05:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:02.834 13:05:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:02.834 13:05:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:02.834 13:05:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:02.834 13:05:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:02.834 13:05:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:02.834 13:05:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:02.834 13:05:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:07:02.834 13:05:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:02.834 13:05:59 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:07:02.834 13:05:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:02.835 13:05:59 accel.accel_dualcast -- accel/accel.sh@19 -- 
# read -r var val 00:07:02.835 13:05:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:02.835 13:05:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:02.835 13:05:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:02.835 13:05:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:02.835 13:05:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:02.835 13:05:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:02.835 13:05:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:02.835 13:05:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:02.835 13:05:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:07:02.835 13:05:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:02.835 13:05:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:02.835 13:05:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:02.835 13:05:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:07:02.835 13:05:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:02.835 13:05:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:02.835 13:05:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:02.835 13:05:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:07:02.835 13:05:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:02.835 13:05:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:02.835 13:05:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:02.835 13:05:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:02.835 13:05:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:02.835 13:05:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:02.835 13:05:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:02.835 13:05:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:02.835 13:05:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:02.835 13:05:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:02.835 13:05:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:03.811 13:06:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:03.811 13:06:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:03.811 13:06:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:03.811 13:06:00 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:03.811 13:06:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:03.811 13:06:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:03.811 13:06:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:03.811 13:06:00 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:03.811 13:06:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:03.811 13:06:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:03.811 13:06:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:03.811 13:06:00 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:03.811 13:06:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:03.811 13:06:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:03.811 13:06:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:03.811 13:06:00 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:03.811 
13:06:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:03.811 13:06:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:03.811 13:06:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:03.811 13:06:00 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:03.811 13:06:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:03.811 13:06:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:03.811 13:06:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:03.812 13:06:00 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:03.812 13:06:00 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:03.812 13:06:00 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:07:03.812 13:06:00 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:03.812 00:07:03.812 real 0m1.471s 00:07:03.812 user 0m1.252s 00:07:03.812 sys 0m0.124s 00:07:03.812 13:06:00 accel.accel_dualcast -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:03.812 13:06:00 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:07:03.812 ************************************ 00:07:03.812 END TEST accel_dualcast 00:07:03.812 ************************************ 00:07:04.069 13:06:00 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:04.069 13:06:00 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:04.069 13:06:00 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:04.069 13:06:00 accel -- common/autotest_common.sh@10 -- # set +x 00:07:04.069 ************************************ 00:07:04.069 START TEST accel_compare 00:07:04.069 ************************************ 00:07:04.069 13:06:00 accel.accel_compare -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compare -y 00:07:04.069 13:06:00 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:07:04.069 13:06:00 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:07:04.069 13:06:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:04.069 13:06:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:04.069 13:06:00 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:04.069 13:06:00 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:07:04.069 13:06:00 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:04.069 13:06:00 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:04.069 13:06:00 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:04.069 13:06:00 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.069 13:06:00 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.069 13:06:00 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:04.069 13:06:00 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:07:04.069 13:06:00 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:07:04.069 [2024-07-15 13:06:00.623830] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
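Every accel_perf invocation in this stretch takes -c /dev/fd/62: build_accel_config assembles a JSON accel configuration in memory (the accel_json_cfg=() and jq -r . entries above), and the harness hands it to the child over an inherited descriptor instead of a temporary file. A stand-alone illustration of that shell idiom, where cat stands in for accel_perf and the JSON is a placeholder rather than the real SPDK config:

    # Feed a here-string to fd 62 and let the child read it back via /dev/fd/62.
    cat /dev/fd/62 62<<< '{"placeholder": true}'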
00:07:04.069 [2024-07-15 13:06:00.623933] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76130 ] 00:07:04.069 [2024-07-15 13:06:00.762640] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.327 [2024-07-15 13:06:00.871502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.327 13:06:00 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:04.327 13:06:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:04.327 13:06:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:04.327 13:06:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:04.327 13:06:00 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:04.327 13:06:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:04.327 13:06:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:04.327 13:06:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:04.327 13:06:00 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:07:04.327 13:06:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:04.327 13:06:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:04.327 13:06:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:04.327 13:06:00 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:04.327 13:06:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:04.327 13:06:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:04.327 13:06:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:04.327 13:06:00 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:04.327 13:06:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:04.327 13:06:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:04.327 13:06:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:04.327 13:06:00 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:07:04.327 13:06:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:04.327 13:06:00 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:07:04.327 13:06:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:04.327 13:06:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:04.327 13:06:00 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:04.327 13:06:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:04.327 13:06:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:04.327 13:06:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:04.327 13:06:00 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:04.327 13:06:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:04.327 13:06:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:04.327 13:06:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:04.327 13:06:00 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:07:04.327 13:06:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:04.327 13:06:00 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:07:04.327 13:06:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:04.327 13:06:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:04.327 13:06:00 
accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:04.327 13:06:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:04.327 13:06:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:04.327 13:06:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:04.327 13:06:00 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:04.327 13:06:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:04.327 13:06:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:04.327 13:06:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:04.327 13:06:00 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:07:04.327 13:06:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:04.327 13:06:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:04.327 13:06:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:04.327 13:06:00 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:07:04.327 13:06:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:04.327 13:06:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:04.327 13:06:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:04.327 13:06:00 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:07:04.327 13:06:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:04.327 13:06:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:04.327 13:06:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:04.327 13:06:00 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:04.327 13:06:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:04.327 13:06:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:04.328 13:06:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:04.328 13:06:00 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:04.328 13:06:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:04.328 13:06:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:04.328 13:06:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:05.718 13:06:02 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:05.718 13:06:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:05.718 13:06:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:05.718 13:06:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:05.718 13:06:02 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:05.718 13:06:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:05.718 13:06:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:05.718 13:06:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:05.718 13:06:02 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:05.718 13:06:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:05.718 13:06:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:05.718 13:06:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:05.718 13:06:02 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:05.718 13:06:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:05.718 13:06:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:05.718 13:06:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:05.718 13:06:02 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:05.718 13:06:02 
accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:05.718 13:06:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:05.718 13:06:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:05.718 13:06:02 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:05.718 13:06:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:05.718 13:06:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:05.718 13:06:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:05.718 13:06:02 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:05.718 13:06:02 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:07:05.718 13:06:02 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:05.718 00:07:05.718 real 0m1.511s 00:07:05.718 user 0m1.287s 00:07:05.718 sys 0m0.132s 00:07:05.718 13:06:02 accel.accel_compare -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:05.718 13:06:02 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:07:05.718 ************************************ 00:07:05.718 END TEST accel_compare 00:07:05.718 ************************************ 00:07:05.718 13:06:02 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:05.718 13:06:02 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:05.718 13:06:02 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:05.718 13:06:02 accel -- common/autotest_common.sh@10 -- # set +x 00:07:05.718 ************************************ 00:07:05.718 START TEST accel_xor 00:07:05.718 ************************************ 00:07:05.718 13:06:02 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y 00:07:05.718 13:06:02 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:05.718 13:06:02 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:05.718 13:06:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.718 13:06:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.718 13:06:02 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:05.718 13:06:02 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:05.718 13:06:02 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:05.718 13:06:02 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:05.718 13:06:02 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:05.718 13:06:02 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.718 13:06:02 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.718 13:06:02 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:05.718 13:06:02 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:05.718 13:06:02 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:05.718 [2024-07-15 13:06:02.188486] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
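For reference, the xor case launched just above can be re-run outside the test harness roughly as follows. The binary path and the -t/-w/-y flags are copied verbatim from the sh@12/sh@15 trace lines; the flag meanings are inferred from the val= lines in the trace that follows (1-second duration, 4096-byte buffers, verification enabled), so treat this as a sketch rather than the harness's exact behaviour:

  # xor workload on the software module, as traced above
  # -t 1   : run the workload for 1 second (matches val='1 seconds')
  # -w xor : xor operation; the val=2 line that follows suggests two source buffers by default
  # -y     : verify each result (matches val=Yes)
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y

The harness additionally passes -c /dev/fd/62 with a generated accel JSON config; since none of the optional module checks in the trace succeed, that config stays empty, which presumably is why the run ends up on the software module reported by the val=software line.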
00:07:05.718 [2024-07-15 13:06:02.188605] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76164 ] 00:07:05.718 [2024-07-15 13:06:02.321235] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.718 [2024-07-15 13:06:02.424536] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.977 13:06:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:05.977 13:06:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.977 13:06:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.977 13:06:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.977 13:06:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:05.977 13:06:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.977 13:06:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.977 13:06:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.977 13:06:02 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:05.977 13:06:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.977 13:06:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.977 13:06:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.977 13:06:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:05.977 13:06:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.977 13:06:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.977 13:06:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.977 13:06:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:05.977 13:06:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.977 13:06:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.977 13:06:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.977 13:06:02 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:05.977 13:06:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.977 13:06:02 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:05.977 13:06:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.977 13:06:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.977 13:06:02 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:07:05.977 13:06:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.977 13:06:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.977 13:06:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.977 13:06:02 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:05.977 13:06:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.977 13:06:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.977 13:06:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.977 13:06:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:05.977 13:06:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.977 13:06:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.977 13:06:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.977 13:06:02 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:05.977 13:06:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.977 13:06:02 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:07:05.977 13:06:02 
accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.977 13:06:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.977 13:06:02 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:05.977 13:06:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.977 13:06:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.977 13:06:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.977 13:06:02 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:05.977 13:06:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.977 13:06:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.977 13:06:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.977 13:06:02 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:05.977 13:06:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.977 13:06:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.977 13:06:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.977 13:06:02 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:05.977 13:06:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.977 13:06:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.977 13:06:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.977 13:06:02 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:05.977 13:06:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.977 13:06:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.977 13:06:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.977 13:06:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:05.977 13:06:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.977 13:06:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.977 13:06:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.977 13:06:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:05.977 13:06:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.977 13:06:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.977 13:06:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:06.911 13:06:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:06.911 13:06:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:06.911 13:06:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:06.911 13:06:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:06.911 13:06:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:06.911 13:06:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:06.911 13:06:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:06.911 13:06:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:06.911 13:06:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:06.911 13:06:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:06.911 13:06:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:06.911 13:06:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:06.911 13:06:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:06.911 13:06:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:06.911 13:06:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:06.911 13:06:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:06.911 13:06:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:06.911 13:06:03 accel.accel_xor -- accel/accel.sh@21 
-- # case "$var" in 00:07:06.911 13:06:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:06.911 13:06:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:06.911 13:06:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:06.911 13:06:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:06.911 13:06:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:06.911 13:06:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:06.911 13:06:03 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:06.911 13:06:03 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:06.911 13:06:03 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:06.911 00:07:06.911 real 0m1.473s 00:07:06.911 user 0m0.014s 00:07:06.911 sys 0m0.002s 00:07:06.911 13:06:03 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:06.911 13:06:03 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:06.911 ************************************ 00:07:06.911 END TEST accel_xor 00:07:06.911 ************************************ 00:07:07.169 13:06:03 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:07.169 13:06:03 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:07.169 13:06:03 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:07.169 13:06:03 accel -- common/autotest_common.sh@10 -- # set +x 00:07:07.169 ************************************ 00:07:07.169 START TEST accel_xor 00:07:07.169 ************************************ 00:07:07.169 13:06:03 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y -x 3 00:07:07.169 13:06:03 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:07.169 13:06:03 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:07.169 13:06:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:07.169 13:06:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:07.169 13:06:03 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:07.169 13:06:03 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:07.169 13:06:03 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:07.169 13:06:03 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:07.169 13:06:03 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:07.169 13:06:03 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.169 13:06:03 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.169 13:06:03 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:07.169 13:06:03 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:07.169 13:06:03 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:07.169 [2024-07-15 13:06:03.709717] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:07:07.169 [2024-07-15 13:06:03.709838] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76204 ] 00:07:07.169 [2024-07-15 13:06:03.848247] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.427 [2024-07-15 13:06:03.958137] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.427 13:06:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:07.427 13:06:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:07.427 13:06:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:07.427 13:06:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:07.427 13:06:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:07.427 13:06:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:07.427 13:06:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:07.427 13:06:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:07.427 13:06:04 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:07.427 13:06:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:07.427 13:06:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:07.427 13:06:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:07.427 13:06:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:07.427 13:06:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:07.427 13:06:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:07.427 13:06:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:07.427 13:06:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:07.427 13:06:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:07.427 13:06:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:07.427 13:06:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:07.427 13:06:04 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:07.427 13:06:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:07.427 13:06:04 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:07.427 13:06:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:07.427 13:06:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:07.427 13:06:04 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:07:07.427 13:06:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:07.427 13:06:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:07.427 13:06:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:07.427 13:06:04 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:07.427 13:06:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:07.427 13:06:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:07.427 13:06:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:07.427 13:06:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:07.427 13:06:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:07.427 13:06:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:07.427 13:06:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:07.427 13:06:04 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:07.427 13:06:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:07.427 13:06:04 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:07:07.427 13:06:04 
accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:07.427 13:06:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:07.427 13:06:04 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:07.427 13:06:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:07.427 13:06:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:07.427 13:06:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:07.427 13:06:04 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:07.427 13:06:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:07.427 13:06:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:07.427 13:06:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:07.427 13:06:04 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:07.427 13:06:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:07.427 13:06:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:07.427 13:06:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:07.427 13:06:04 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:07.427 13:06:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:07.427 13:06:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:07.428 13:06:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:07.428 13:06:04 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:07.428 13:06:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:07.428 13:06:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:07.428 13:06:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:07.428 13:06:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:07.428 13:06:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:07.428 13:06:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:07.428 13:06:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:07.428 13:06:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:07.428 13:06:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:07.428 13:06:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:07.428 13:06:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.802 13:06:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:08.802 13:06:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.802 13:06:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.802 13:06:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.802 13:06:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:08.802 13:06:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.802 13:06:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.802 13:06:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.802 13:06:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:08.802 13:06:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.802 13:06:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.802 13:06:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.802 13:06:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:08.802 13:06:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.802 13:06:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.802 13:06:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.802 13:06:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:08.802 13:06:05 accel.accel_xor -- accel/accel.sh@21 
-- # case "$var" in 00:07:08.802 13:06:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.802 13:06:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.802 13:06:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:08.802 13:06:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.802 13:06:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.802 13:06:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.802 ************************************ 00:07:08.802 END TEST accel_xor 00:07:08.802 ************************************ 00:07:08.803 13:06:05 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:08.803 13:06:05 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:08.803 13:06:05 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:08.803 00:07:08.803 real 0m1.495s 00:07:08.803 user 0m1.279s 00:07:08.803 sys 0m0.122s 00:07:08.803 13:06:05 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:08.803 13:06:05 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:08.803 13:06:05 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:08.803 13:06:05 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:07:08.803 13:06:05 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:08.803 13:06:05 accel -- common/autotest_common.sh@10 -- # set +x 00:07:08.803 ************************************ 00:07:08.803 START TEST accel_dif_verify 00:07:08.803 ************************************ 00:07:08.803 13:06:05 accel.accel_dif_verify -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_verify 00:07:08.803 13:06:05 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:07:08.803 13:06:05 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:07:08.803 13:06:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:08.803 13:06:05 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:08.803 13:06:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:08.803 13:06:05 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:08.803 13:06:05 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:08.803 13:06:05 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:08.803 13:06:05 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:08.803 13:06:05 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.803 13:06:05 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.803 13:06:05 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:08.803 13:06:05 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:08.803 13:06:05 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:07:08.803 [2024-07-15 13:06:05.263189] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:07:08.803 [2024-07-15 13:06:05.263317] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76233 ] 00:07:08.803 [2024-07-15 13:06:05.400651] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.803 [2024-07-15 13:06:05.509986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.061 13:06:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:09.061 13:06:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:09.062 13:06:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:09.062 13:06:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:09.062 13:06:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:09.062 13:06:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:09.062 13:06:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:09.062 13:06:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:09.062 13:06:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:07:09.062 13:06:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:09.062 13:06:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:09.062 13:06:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:09.062 13:06:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:09.062 13:06:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:09.062 13:06:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:09.062 13:06:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:09.062 13:06:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:09.062 13:06:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:09.062 13:06:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:09.062 13:06:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:09.062 13:06:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:07:09.062 13:06:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:09.062 13:06:05 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:07:09.062 13:06:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:09.062 13:06:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:09.062 13:06:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:09.062 13:06:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:09.062 13:06:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:09.062 13:06:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:09.062 13:06:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:09.062 13:06:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:09.062 13:06:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:09.062 13:06:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:09.062 13:06:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:07:09.062 13:06:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:09.062 13:06:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:09.062 13:06:05 accel.accel_dif_verify -- accel/accel.sh@19 
-- # read -r var val 00:07:09.062 13:06:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:07:09.062 13:06:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:09.062 13:06:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:09.062 13:06:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:09.062 13:06:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:09.062 13:06:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:09.062 13:06:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:09.062 13:06:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:09.062 13:06:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:07:09.062 13:06:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:09.062 13:06:05 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:07:09.062 13:06:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:09.062 13:06:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:09.062 13:06:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:09.062 13:06:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:09.062 13:06:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:09.062 13:06:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:09.062 13:06:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:09.062 13:06:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:09.062 13:06:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:09.062 13:06:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:09.062 13:06:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:07:09.062 13:06:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:09.062 13:06:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:09.062 13:06:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:09.062 13:06:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:07:09.062 13:06:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:09.062 13:06:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:09.062 13:06:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:09.062 13:06:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:07:09.062 13:06:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:09.062 13:06:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:09.062 13:06:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:09.062 13:06:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:09.062 13:06:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:09.062 13:06:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:09.062 13:06:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:09.062 13:06:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:09.062 13:06:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:09.062 13:06:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:09.062 13:06:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:09.997 13:06:06 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:09.997 13:06:06 accel.accel_dif_verify -- accel/accel.sh@21 -- 
# case "$var" in 00:07:09.997 13:06:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:09.997 13:06:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:09.997 13:06:06 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:09.997 13:06:06 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:09.997 13:06:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:09.997 13:06:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:09.997 13:06:06 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:09.997 13:06:06 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:09.997 13:06:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:09.997 13:06:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:09.997 13:06:06 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:09.997 13:06:06 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:09.997 13:06:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:09.998 13:06:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:09.998 13:06:06 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:09.998 13:06:06 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:09.998 13:06:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:09.998 13:06:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:09.998 13:06:06 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:09.998 13:06:06 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:09.998 13:06:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:09.998 13:06:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:09.998 13:06:06 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:09.998 13:06:06 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:07:09.998 13:06:06 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:09.998 00:07:09.998 real 0m1.496s 00:07:09.998 user 0m1.279s 00:07:09.998 sys 0m0.119s 00:07:09.998 13:06:06 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:09.998 13:06:06 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:07:09.998 ************************************ 00:07:09.998 END TEST accel_dif_verify 00:07:09.998 ************************************ 00:07:10.256 13:06:06 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:10.256 13:06:06 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:07:10.256 13:06:06 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:10.256 13:06:06 accel -- common/autotest_common.sh@10 -- # set +x 00:07:10.256 ************************************ 00:07:10.256 START TEST accel_dif_generate 00:07:10.256 ************************************ 00:07:10.257 13:06:06 accel.accel_dif_generate -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate 00:07:10.257 13:06:06 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:07:10.257 13:06:06 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:07:10.257 13:06:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:10.257 13:06:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:10.257 13:06:06 accel.accel_dif_generate -- accel/accel.sh@15 -- # 
accel_perf -t 1 -w dif_generate 00:07:10.257 13:06:06 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:10.257 13:06:06 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:07:10.257 13:06:06 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:10.257 13:06:06 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:10.257 13:06:06 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.257 13:06:06 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.257 13:06:06 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:10.257 13:06:06 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:07:10.257 13:06:06 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:07:10.257 [2024-07-15 13:06:06.798785] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:10.257 [2024-07-15 13:06:06.799365] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76273 ] 00:07:10.257 [2024-07-15 13:06:06.935644] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.517 [2024-07-15 13:06:07.024211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.517 13:06:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:10.517 13:06:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:10.517 13:06:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:10.517 13:06:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:10.517 13:06:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:10.517 13:06:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:10.517 13:06:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:10.517 13:06:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:10.517 13:06:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:07:10.517 13:06:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:10.517 13:06:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:10.517 13:06:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:10.517 13:06:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:10.517 13:06:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:10.517 13:06:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:10.517 13:06:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:10.517 13:06:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:10.517 13:06:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:10.517 13:06:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:10.517 13:06:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:10.517 13:06:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:07:10.517 13:06:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:10.517 13:06:07 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:07:10.517 13:06:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:10.517 
13:06:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:10.517 13:06:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:10.517 13:06:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:10.517 13:06:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:10.517 13:06:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:10.517 13:06:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:10.517 13:06:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:10.517 13:06:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:10.517 13:06:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:10.517 13:06:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:07:10.517 13:06:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:10.517 13:06:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:10.517 13:06:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:10.517 13:06:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:07:10.517 13:06:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:10.517 13:06:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:10.517 13:06:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:10.517 13:06:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:10.517 13:06:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:10.517 13:06:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:10.517 13:06:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:10.517 13:06:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:07:10.517 13:06:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:10.517 13:06:07 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:07:10.517 13:06:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:10.517 13:06:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:10.517 13:06:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:10.517 13:06:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:10.517 13:06:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:10.517 13:06:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:10.517 13:06:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:10.517 13:06:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:10.517 13:06:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:10.517 13:06:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:10.517 13:06:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:07:10.517 13:06:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:10.517 13:06:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:10.517 13:06:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:10.517 13:06:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:07:10.517 13:06:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:10.517 13:06:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:10.517 13:06:07 accel.accel_dif_generate -- 
accel/accel.sh@19 -- # read -r var val 00:07:10.517 13:06:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:07:10.517 13:06:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:10.517 13:06:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:10.517 13:06:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:10.517 13:06:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:10.517 13:06:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:10.517 13:06:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:10.517 13:06:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:10.517 13:06:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:10.517 13:06:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:10.517 13:06:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:10.517 13:06:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:11.918 13:06:08 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:11.918 13:06:08 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:11.918 13:06:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:11.918 13:06:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:11.918 13:06:08 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:11.918 13:06:08 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:11.918 13:06:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:11.918 13:06:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:11.918 13:06:08 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:11.918 13:06:08 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:11.918 13:06:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:11.918 13:06:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:11.918 13:06:08 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:11.918 13:06:08 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:11.918 13:06:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:11.918 13:06:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:11.918 13:06:08 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:11.918 13:06:08 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:11.918 13:06:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:11.918 13:06:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:11.918 13:06:08 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:11.918 13:06:08 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:11.918 13:06:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:11.918 13:06:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:11.918 13:06:08 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:11.918 13:06:08 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:07:11.918 13:06:08 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:11.918 00:07:11.918 real 0m1.477s 00:07:11.918 user 0m1.263s 00:07:11.918 sys 0m0.116s 00:07:11.918 13:06:08 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:11.918 
13:06:08 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:07:11.918 ************************************ 00:07:11.918 END TEST accel_dif_generate 00:07:11.918 ************************************ 00:07:11.918 13:06:08 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:11.918 13:06:08 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:07:11.918 13:06:08 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:11.918 13:06:08 accel -- common/autotest_common.sh@10 -- # set +x 00:07:11.918 ************************************ 00:07:11.918 START TEST accel_dif_generate_copy 00:07:11.918 ************************************ 00:07:11.918 13:06:08 accel.accel_dif_generate_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate_copy 00:07:11.918 13:06:08 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:11.918 13:06:08 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:07:11.918 13:06:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:11.918 13:06:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:11.918 13:06:08 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:11.918 13:06:08 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:11.918 13:06:08 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:11.918 13:06:08 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:11.918 13:06:08 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:11.918 13:06:08 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.918 13:06:08 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.918 13:06:08 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:11.918 13:06:08 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:11.918 13:06:08 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:07:11.918 [2024-07-15 13:06:08.325177] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
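Each of these cases is launched through the same pattern visible in the sh@12 trace line above: accel_perf receives its accel configuration as JSON on file descriptor 62 (-c /dev/fd/62), and build_accel_config leaves that JSON empty here because none of the optional-module checks in the trace succeed. A minimal sketch of the same mechanism, assuming an empty JSON document is acceptable to accel_perf (the log does not show the generated JSON itself):

  # feed an (assumed) empty accel config on fd 62, then run the traced workload
  exec 62<<< '{}'
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy
  exec 62<&-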
00:07:11.918 [2024-07-15 13:06:08.325309] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76302 ] 00:07:11.918 [2024-07-15 13:06:08.455679] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.918 [2024-07-15 13:06:08.545326] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.918 13:06:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:11.918 13:06:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:11.918 13:06:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:11.918 13:06:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:11.918 13:06:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:11.918 13:06:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:11.918 13:06:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:11.918 13:06:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:11.918 13:06:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:07:11.918 13:06:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:11.919 13:06:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:11.919 13:06:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:11.919 13:06:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:11.919 13:06:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:11.919 13:06:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:11.919 13:06:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:11.919 13:06:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:11.919 13:06:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:11.919 13:06:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:11.919 13:06:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:11.919 13:06:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:07:11.919 13:06:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:11.919 13:06:08 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:07:11.919 13:06:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:11.919 13:06:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:11.919 13:06:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:11.919 13:06:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:11.919 13:06:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:11.919 13:06:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:11.919 13:06:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:11.919 13:06:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:11.919 13:06:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:11.919 13:06:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:11.919 13:06:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 
-- # val= 00:07:11.919 13:06:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:11.919 13:06:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:11.919 13:06:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:11.919 13:06:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:07:11.919 13:06:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:11.919 13:06:08 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:11.919 13:06:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:11.919 13:06:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:11.919 13:06:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:11.919 13:06:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:11.919 13:06:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:11.919 13:06:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:11.919 13:06:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:11.919 13:06:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:11.919 13:06:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:11.919 13:06:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:11.919 13:06:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:07:11.919 13:06:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:11.919 13:06:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:11.919 13:06:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:11.919 13:06:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:11.919 13:06:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:11.919 13:06:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:11.919 13:06:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:11.919 13:06:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:07:11.919 13:06:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:11.919 13:06:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:11.919 13:06:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:11.919 13:06:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:11.919 13:06:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:11.919 13:06:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:11.919 13:06:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:11.919 13:06:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:11.919 13:06:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:11.919 13:06:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:11.919 13:06:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:13.293 13:06:09 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:13.293 13:06:09 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:13.293 13:06:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:13.293 13:06:09 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # read -r var val 00:07:13.293 13:06:09 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:13.293 13:06:09 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:13.293 13:06:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:13.293 13:06:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:13.293 13:06:09 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:13.293 13:06:09 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:13.293 13:06:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:13.293 13:06:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:13.293 13:06:09 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:13.293 13:06:09 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:13.293 13:06:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:13.293 13:06:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:13.293 13:06:09 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:13.293 13:06:09 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:13.293 13:06:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:13.293 13:06:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:13.293 13:06:09 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:13.293 13:06:09 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:13.293 13:06:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:13.293 13:06:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:13.293 13:06:09 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:13.293 13:06:09 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:07:13.293 13:06:09 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:13.293 00:07:13.293 real 0m1.455s 00:07:13.293 user 0m1.238s 00:07:13.293 sys 0m0.118s 00:07:13.293 13:06:09 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:13.293 13:06:09 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:07:13.293 ************************************ 00:07:13.293 END TEST accel_dif_generate_copy 00:07:13.293 ************************************ 00:07:13.293 13:06:09 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:07:13.293 13:06:09 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:13.293 13:06:09 accel -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:07:13.293 13:06:09 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:13.293 13:06:09 accel -- common/autotest_common.sh@10 -- # set +x 00:07:13.293 ************************************ 00:07:13.293 START TEST accel_comp 00:07:13.293 ************************************ 00:07:13.293 13:06:09 accel.accel_comp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:13.293 13:06:09 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:07:13.293 13:06:09 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:07:13.293 13:06:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 
00:07:13.293 13:06:09 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:13.293 13:06:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:13.293 13:06:09 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:13.293 13:06:09 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:07:13.293 13:06:09 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:13.293 13:06:09 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:13.293 13:06:09 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.293 13:06:09 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.293 13:06:09 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:13.293 13:06:09 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:07:13.293 13:06:09 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:07:13.293 [2024-07-15 13:06:09.826961] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:13.293 [2024-07-15 13:06:09.827055] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76344 ] 00:07:13.293 [2024-07-15 13:06:09.964617] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.552 [2024-07-15 13:06:10.042289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.552 13:06:10 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:13.552 13:06:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:13.552 13:06:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:13.552 13:06:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:13.552 13:06:10 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:13.552 13:06:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:13.552 13:06:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:13.552 13:06:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:13.552 13:06:10 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:13.552 13:06:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:13.552 13:06:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:13.552 13:06:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:13.552 13:06:10 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:07:13.552 13:06:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:13.552 13:06:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:13.552 13:06:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:13.552 13:06:10 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:13.552 13:06:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:13.552 13:06:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:13.552 13:06:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:13.552 13:06:10 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:13.552 13:06:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:13.552 13:06:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:13.552 13:06:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:13.552 13:06:10 accel.accel_comp -- accel/accel.sh@20 -- # 
val=compress 00:07:13.552 13:06:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:13.552 13:06:10 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:07:13.552 13:06:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:13.552 13:06:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:13.552 13:06:10 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:13.552 13:06:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:13.552 13:06:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:13.552 13:06:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:13.552 13:06:10 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:13.552 13:06:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:13.552 13:06:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:13.552 13:06:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:13.552 13:06:10 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:07:13.552 13:06:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:13.552 13:06:10 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:07:13.552 13:06:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:13.552 13:06:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:13.552 13:06:10 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:13.552 13:06:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:13.552 13:06:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:13.552 13:06:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:13.552 13:06:10 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:13.552 13:06:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:13.552 13:06:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:13.552 13:06:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:13.552 13:06:10 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:13.552 13:06:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:13.552 13:06:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:13.552 13:06:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:13.552 13:06:10 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:07:13.552 13:06:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:13.552 13:06:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:13.552 13:06:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:13.552 13:06:10 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:13.552 13:06:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:13.552 13:06:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:13.552 13:06:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:13.552 13:06:10 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:07:13.552 13:06:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:13.552 13:06:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:13.552 13:06:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:13.552 13:06:10 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:13.552 13:06:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:13.552 13:06:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:13.552 13:06:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:13.552 13:06:10 
accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:13.552 13:06:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:13.552 13:06:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:13.552 13:06:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:14.929 13:06:11 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:14.929 13:06:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:14.929 13:06:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:14.929 13:06:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:14.929 13:06:11 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:14.929 13:06:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:14.929 13:06:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:14.929 13:06:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:14.929 13:06:11 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:14.929 13:06:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:14.929 13:06:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:14.929 13:06:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:14.929 13:06:11 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:14.929 13:06:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:14.929 13:06:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:14.929 13:06:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:14.929 13:06:11 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:14.929 13:06:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:14.929 13:06:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:14.929 13:06:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:14.929 13:06:11 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:14.929 13:06:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:14.929 13:06:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:14.929 13:06:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:14.929 13:06:11 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:14.929 13:06:11 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:14.929 13:06:11 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:14.929 00:07:14.929 real 0m1.462s 00:07:14.929 user 0m1.247s 00:07:14.929 sys 0m0.120s 00:07:14.929 ************************************ 00:07:14.929 END TEST accel_comp 00:07:14.929 ************************************ 00:07:14.929 13:06:11 accel.accel_comp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:14.929 13:06:11 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:07:14.929 13:06:11 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:14.929 13:06:11 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:14.929 13:06:11 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:14.929 13:06:11 accel -- common/autotest_common.sh@10 -- # set +x 00:07:14.929 ************************************ 00:07:14.929 START TEST accel_decomp 00:07:14.929 ************************************ 00:07:14.929 13:06:11 accel.accel_decomp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:14.929 13:06:11 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:07:14.929 
13:06:11 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:07:14.929 13:06:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:14.929 13:06:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:14.929 13:06:11 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:14.929 13:06:11 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:14.929 13:06:11 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:14.929 13:06:11 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:14.929 13:06:11 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:14.929 13:06:11 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.929 13:06:11 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.929 13:06:11 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:14.929 13:06:11 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:14.929 13:06:11 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:07:14.929 [2024-07-15 13:06:11.340393] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:14.929 [2024-07-15 13:06:11.340493] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76373 ] 00:07:14.929 [2024-07-15 13:06:11.472485] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.929 [2024-07-15 13:06:11.569927] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.929 13:06:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:14.929 13:06:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:14.929 13:06:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:14.929 13:06:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:14.929 13:06:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:14.929 13:06:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:14.929 13:06:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:14.929 13:06:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:14.929 13:06:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:14.929 13:06:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:14.929 13:06:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:14.929 13:06:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:14.929 13:06:11 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:14.929 13:06:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:14.929 13:06:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:14.929 13:06:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:14.929 13:06:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:14.929 13:06:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:14.929 13:06:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:14.930 13:06:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:14.930 13:06:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:14.930 13:06:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" 
in 00:07:14.930 13:06:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:14.930 13:06:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:14.930 13:06:11 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:14.930 13:06:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:14.930 13:06:11 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:14.930 13:06:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:14.930 13:06:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:14.930 13:06:11 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:14.930 13:06:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:14.930 13:06:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:14.930 13:06:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:14.930 13:06:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:14.930 13:06:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:14.930 13:06:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:14.930 13:06:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:14.930 13:06:11 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:07:14.930 13:06:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:14.930 13:06:11 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:07:14.930 13:06:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:14.930 13:06:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:14.930 13:06:11 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:14.930 13:06:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:14.930 13:06:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:14.930 13:06:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:14.930 13:06:11 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:14.930 13:06:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:14.930 13:06:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:14.930 13:06:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:14.930 13:06:11 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:14.930 13:06:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:14.930 13:06:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:14.930 13:06:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:14.930 13:06:11 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:07:14.930 13:06:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:14.930 13:06:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:14.930 13:06:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:14.930 13:06:11 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:14.930 13:06:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:14.930 13:06:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:14.930 13:06:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:14.930 13:06:11 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:07:14.930 13:06:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:14.930 13:06:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:14.930 13:06:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:14.930 13:06:11 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:14.930 13:06:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:14.930 13:06:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:14.930 13:06:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:14.930 13:06:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:14.930 13:06:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:14.930 13:06:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:14.930 13:06:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:16.305 13:06:12 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:16.305 13:06:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.305 13:06:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:16.305 13:06:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:16.305 13:06:12 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:16.305 13:06:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.305 13:06:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:16.305 13:06:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:16.305 13:06:12 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:16.305 13:06:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.305 13:06:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:16.305 13:06:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:16.305 13:06:12 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:16.305 13:06:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.305 13:06:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:16.305 13:06:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:16.305 ************************************ 00:07:16.305 END TEST accel_decomp 00:07:16.305 ************************************ 00:07:16.305 13:06:12 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:16.305 13:06:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.305 13:06:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:16.305 13:06:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:16.305 13:06:12 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:16.305 13:06:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.305 13:06:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:16.305 13:06:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:16.305 13:06:12 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:16.305 13:06:12 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:16.305 13:06:12 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:16.305 00:07:16.305 real 0m1.557s 00:07:16.305 user 0m1.358s 00:07:16.305 sys 0m0.106s 00:07:16.305 13:06:12 accel.accel_decomp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:16.305 13:06:12 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:16.305 13:06:12 accel -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:16.305 13:06:12 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:16.305 13:06:12 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:16.305 13:06:12 accel -- common/autotest_common.sh@10 -- # set +x 
00:07:16.305 ************************************ 00:07:16.305 START TEST accel_decmop_full 00:07:16.305 ************************************ 00:07:16.305 13:06:12 accel.accel_decmop_full -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:16.305 13:06:12 accel.accel_decmop_full -- accel/accel.sh@16 -- # local accel_opc 00:07:16.305 13:06:12 accel.accel_decmop_full -- accel/accel.sh@17 -- # local accel_module 00:07:16.305 13:06:12 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:16.305 13:06:12 accel.accel_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:16.305 13:06:12 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:16.305 13:06:12 accel.accel_decmop_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:16.305 13:06:12 accel.accel_decmop_full -- accel/accel.sh@12 -- # build_accel_config 00:07:16.305 13:06:12 accel.accel_decmop_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:16.305 13:06:12 accel.accel_decmop_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:16.305 13:06:12 accel.accel_decmop_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.305 13:06:12 accel.accel_decmop_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.305 13:06:12 accel.accel_decmop_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:16.305 13:06:12 accel.accel_decmop_full -- accel/accel.sh@40 -- # local IFS=, 00:07:16.305 13:06:12 accel.accel_decmop_full -- accel/accel.sh@41 -- # jq -r . 00:07:16.305 [2024-07-15 13:06:12.938496] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:07:16.305 [2024-07-15 13:06:12.939173] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76412 ] 00:07:16.579 [2024-07-15 13:06:13.070414] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.579 [2024-07-15 13:06:13.160972] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.579 13:06:13 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:16.579 13:06:13 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:16.579 13:06:13 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:16.579 13:06:13 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:16.579 13:06:13 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:16.579 13:06:13 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:16.579 13:06:13 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:16.579 13:06:13 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:16.579 13:06:13 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:16.579 13:06:13 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:16.579 13:06:13 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:16.579 13:06:13 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:16.579 13:06:13 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=0x1 00:07:16.579 13:06:13 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:16.579 13:06:13 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:16.579 13:06:13 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:16.579 13:06:13 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:16.579 13:06:13 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:16.579 13:06:13 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:16.579 13:06:13 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:16.579 13:06:13 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:16.579 13:06:13 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:16.579 13:06:13 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:16.579 13:06:13 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:16.579 13:06:13 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=decompress 00:07:16.579 13:06:13 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:16.579 13:06:13 accel.accel_decmop_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:16.579 13:06:13 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:16.579 13:06:13 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:16.579 13:06:13 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:16.579 13:06:13 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:16.579 13:06:13 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:16.579 13:06:13 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:16.579 13:06:13 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:16.579 13:06:13 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:16.579 13:06:13 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:16.579 13:06:13 accel.accel_decmop_full -- 
accel/accel.sh@19 -- # read -r var val 00:07:16.579 13:06:13 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=software 00:07:16.579 13:06:13 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:16.579 13:06:13 accel.accel_decmop_full -- accel/accel.sh@22 -- # accel_module=software 00:07:16.579 13:06:13 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:16.579 13:06:13 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:16.579 13:06:13 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:16.579 13:06:13 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:16.579 13:06:13 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:16.579 13:06:13 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:16.579 13:06:13 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:07:16.579 13:06:13 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:16.579 13:06:13 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:16.579 13:06:13 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:16.579 13:06:13 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:07:16.579 13:06:13 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:16.579 13:06:13 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:16.579 13:06:13 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:16.579 13:06:13 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=1 00:07:16.579 13:06:13 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:16.579 13:06:13 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:16.579 13:06:13 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:16.579 13:06:13 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:16.579 13:06:13 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:16.579 13:06:13 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:16.579 13:06:13 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:16.579 13:06:13 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=Yes 00:07:16.579 13:06:13 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:16.579 13:06:13 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:16.579 13:06:13 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:16.579 13:06:13 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:16.579 13:06:13 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:16.579 13:06:13 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:16.579 13:06:13 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:16.579 13:06:13 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:16.579 13:06:13 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:16.579 13:06:13 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:16.579 13:06:13 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:18.024 13:06:14 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:18.024 13:06:14 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:18.024 13:06:14 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:18.024 13:06:14 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:18.024 13:06:14 accel.accel_decmop_full -- 
accel/accel.sh@20 -- # val= 00:07:18.024 13:06:14 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:18.024 13:06:14 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:18.024 13:06:14 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:18.024 13:06:14 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:18.024 13:06:14 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:18.024 13:06:14 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:18.024 13:06:14 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:18.024 13:06:14 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:18.024 13:06:14 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:18.024 13:06:14 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:18.024 13:06:14 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:18.024 13:06:14 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:18.024 13:06:14 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:18.024 13:06:14 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:18.024 13:06:14 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:18.024 13:06:14 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:18.024 13:06:14 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:18.024 13:06:14 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:18.024 13:06:14 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:18.024 13:06:14 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:18.024 13:06:14 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:18.024 13:06:14 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:18.024 00:07:18.024 real 0m1.488s 00:07:18.024 user 0m1.273s 00:07:18.024 sys 0m0.120s 00:07:18.024 13:06:14 accel.accel_decmop_full -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:18.024 13:06:14 accel.accel_decmop_full -- common/autotest_common.sh@10 -- # set +x 00:07:18.024 ************************************ 00:07:18.024 END TEST accel_decmop_full 00:07:18.024 ************************************ 00:07:18.024 13:06:14 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:18.024 13:06:14 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:18.024 13:06:14 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:18.024 13:06:14 accel -- common/autotest_common.sh@10 -- # set +x 00:07:18.024 ************************************ 00:07:18.024 START TEST accel_decomp_mcore 00:07:18.024 ************************************ 00:07:18.024 13:06:14 accel.accel_decomp_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:18.024 13:06:14 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:18.024 13:06:14 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:18.024 13:06:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:18.024 13:06:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:18.024 13:06:14 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 
00:07:18.024 13:06:14 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:18.024 13:06:14 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:18.024 13:06:14 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:18.024 13:06:14 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:18.024 13:06:14 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:18.024 13:06:14 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:18.024 13:06:14 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:18.024 13:06:14 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:18.024 13:06:14 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:18.024 [2024-07-15 13:06:14.488478] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:18.024 [2024-07-15 13:06:14.488578] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76442 ] 00:07:18.024 [2024-07-15 13:06:14.629188] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:18.024 [2024-07-15 13:06:14.740480] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:18.024 [2024-07-15 13:06:14.740583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:18.024 [2024-07-15 13:06:14.740729] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:18.024 [2024-07-15 13:06:14.740730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.282 13:06:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:18.282 13:06:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:18.282 13:06:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:18.282 13:06:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:18.282 13:06:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:18.282 13:06:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:18.282 13:06:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:18.282 13:06:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:18.282 13:06:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:18.282 13:06:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:18.282 13:06:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:18.282 13:06:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:18.282 13:06:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:18.282 13:06:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:18.282 13:06:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:18.282 13:06:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:18.282 13:06:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:18.282 13:06:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:18.282 13:06:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:18.282 13:06:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:18.282 13:06:14 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:18.282 13:06:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:18.282 13:06:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:18.282 13:06:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:18.282 13:06:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:18.282 13:06:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:18.282 13:06:14 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:18.282 13:06:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:18.282 13:06:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:18.282 13:06:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:18.282 13:06:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:18.282 13:06:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:18.282 13:06:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:18.282 13:06:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:18.282 13:06:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:18.282 13:06:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:18.282 13:06:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:18.282 13:06:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:07:18.282 13:06:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:18.282 13:06:14 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:18.282 13:06:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:18.282 13:06:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:18.282 13:06:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:18.282 13:06:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:18.282 13:06:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:18.282 13:06:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:18.282 13:06:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:18.282 13:06:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:18.282 13:06:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:18.282 13:06:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:18.282 13:06:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:18.282 13:06:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:18.282 13:06:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:18.282 13:06:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:18.282 13:06:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:18.282 13:06:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:18.282 13:06:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:18.282 13:06:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:18.282 13:06:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:18.282 13:06:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:18.282 13:06:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:18.282 13:06:14 accel.accel_decomp_mcore 
-- accel/accel.sh@19 -- # read -r var val 00:07:18.282 13:06:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:18.282 13:06:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:18.282 13:06:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:18.282 13:06:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:18.282 13:06:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:18.282 13:06:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:18.283 13:06:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:18.283 13:06:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:18.283 13:06:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:18.283 13:06:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:18.283 13:06:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:18.283 13:06:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.657 13:06:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:19.657 13:06:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.657 13:06:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.657 13:06:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.657 13:06:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:19.657 13:06:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.657 13:06:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.657 13:06:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.657 13:06:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:19.657 13:06:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.657 13:06:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.657 13:06:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.657 13:06:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:19.657 13:06:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.657 13:06:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.657 13:06:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.657 13:06:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:19.657 13:06:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.657 13:06:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.657 13:06:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.657 13:06:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:19.657 13:06:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.657 13:06:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.657 13:06:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.657 13:06:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:19.657 13:06:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.657 13:06:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.657 13:06:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.657 13:06:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:19.657 13:06:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:07:19.657 13:06:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.657 13:06:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.657 13:06:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:19.657 13:06:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.657 13:06:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.657 13:06:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.657 13:06:15 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:19.657 13:06:15 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:19.657 13:06:15 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:19.657 00:07:19.657 real 0m1.536s 00:07:19.657 user 0m4.730s 00:07:19.657 sys 0m0.134s 00:07:19.657 13:06:15 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:19.657 13:06:15 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:19.657 ************************************ 00:07:19.657 END TEST accel_decomp_mcore 00:07:19.657 ************************************ 00:07:19.657 13:06:16 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:19.657 13:06:16 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:19.657 13:06:16 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:19.657 13:06:16 accel -- common/autotest_common.sh@10 -- # set +x 00:07:19.657 ************************************ 00:07:19.657 START TEST accel_decomp_full_mcore 00:07:19.657 ************************************ 00:07:19.657 13:06:16 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:19.657 13:06:16 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:19.657 13:06:16 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:19.657 13:06:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.657 13:06:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.657 13:06:16 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:19.657 13:06:16 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:19.657 13:06:16 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:19.657 13:06:16 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:19.657 13:06:16 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:19.657 13:06:16 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.657 13:06:16 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.657 13:06:16 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:19.657 13:06:16 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:19.657 13:06:16 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 
00:07:19.657 [2024-07-15 13:06:16.069731] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:19.657 [2024-07-15 13:06:16.069829] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76484 ] 00:07:19.657 [2024-07-15 13:06:16.205116] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:19.657 [2024-07-15 13:06:16.300798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:19.657 [2024-07-15 13:06:16.300950] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:19.657 [2024-07-15 13:06:16.301052] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:19.657 [2024-07-15 13:06:16.301271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.657 13:06:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:19.657 13:06:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.657 13:06:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.657 13:06:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.657 13:06:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:19.657 13:06:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.657 13:06:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.657 13:06:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.657 13:06:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:19.657 13:06:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.657 13:06:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.658 13:06:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.658 13:06:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:19.658 13:06:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.658 13:06:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.658 13:06:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.658 13:06:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:19.658 13:06:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.658 13:06:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.658 13:06:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.658 13:06:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:19.658 13:06:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.658 13:06:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.658 13:06:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.658 13:06:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:19.658 13:06:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.658 13:06:16 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:19.658 13:06:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.658 13:06:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.658 13:06:16 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:19.658 13:06:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.658 13:06:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.658 13:06:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.658 13:06:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:19.658 13:06:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.658 13:06:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.658 13:06:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.658 13:06:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:19.658 13:06:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.658 13:06:16 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:19.658 13:06:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.658 13:06:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.658 13:06:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:19.658 13:06:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.658 13:06:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.658 13:06:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.658 13:06:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:19.658 13:06:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.658 13:06:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.658 13:06:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.658 13:06:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:19.658 13:06:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.658 13:06:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.658 13:06:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.658 13:06:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:19.658 13:06:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.658 13:06:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.658 13:06:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.658 13:06:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:19.658 13:06:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.658 13:06:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.658 13:06:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.658 13:06:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:19.658 13:06:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.658 13:06:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.658 13:06:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.658 13:06:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:19.658 13:06:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.658 13:06:16 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.658 13:06:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.658 13:06:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:19.658 13:06:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.658 13:06:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.658 13:06:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.033 13:06:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:21.033 13:06:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.033 13:06:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.033 13:06:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.033 13:06:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:21.033 13:06:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.033 13:06:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.033 13:06:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.033 13:06:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:21.033 13:06:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.033 13:06:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.033 13:06:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.033 13:06:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:21.033 13:06:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.033 13:06:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.033 13:06:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.033 13:06:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:21.033 13:06:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.033 13:06:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.033 13:06:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.033 13:06:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:21.033 13:06:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.033 13:06:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.033 13:06:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.033 13:06:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:21.033 13:06:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.033 13:06:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.033 13:06:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.033 13:06:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:21.033 13:06:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.033 13:06:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.033 13:06:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.033 13:06:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:21.033 13:06:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.033 13:06:17 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.033 13:06:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.033 13:06:17 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:21.033 13:06:17 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:21.033 13:06:17 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:21.033 00:07:21.033 real 0m1.492s 00:07:21.033 user 0m0.015s 00:07:21.033 sys 0m0.004s 00:07:21.033 13:06:17 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:21.033 ************************************ 00:07:21.033 END TEST accel_decomp_full_mcore 00:07:21.033 ************************************ 00:07:21.033 13:06:17 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:21.033 13:06:17 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:21.033 13:06:17 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:21.033 13:06:17 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:21.033 13:06:17 accel -- common/autotest_common.sh@10 -- # set +x 00:07:21.033 ************************************ 00:07:21.033 START TEST accel_decomp_mthread 00:07:21.033 ************************************ 00:07:21.033 13:06:17 accel.accel_decomp_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:21.033 13:06:17 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:21.033 13:06:17 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:21.033 13:06:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:21.033 13:06:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:21.033 13:06:17 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:21.033 13:06:17 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:21.033 13:06:17 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:21.033 13:06:17 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:21.033 13:06:17 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:21.033 13:06:17 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:21.033 13:06:17 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:21.033 13:06:17 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:21.033 13:06:17 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:21.033 13:06:17 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:21.033 [2024-07-15 13:06:17.611075] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
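The xtrace lines above show the accel_decomp_mthread case handing its parameters to accel_perf. A minimal sketch of how to reproduce that run by hand, assuming the SPDK build tree at /home/vagrant/spdk_repo/spdk shown in the log; the $SPDK shorthand is only for brevity here, and the flag meanings are inferred from the test name and the values echoed in the trace rather than from accel_perf's own help text:

  # Software decompress on one reactor core with two worker threads, verifying the output.
  # The -c /dev/fd/62 seen in the trace only feeds the JSON config assembled by
  # build_accel_config and can be dropped for a manual run using the software module.
  SPDK=/home/vagrant/spdk_repo/spdk
  "$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -T 2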
00:07:21.033 [2024-07-15 13:06:17.611197] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76517 ] 00:07:21.033 [2024-07-15 13:06:17.749050] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.292 [2024-07-15 13:06:17.822771] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.293 13:06:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:21.293 13:06:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:21.293 13:06:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:21.293 13:06:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:21.293 13:06:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:21.293 13:06:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:21.293 13:06:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:21.293 13:06:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:21.293 13:06:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:21.293 13:06:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:21.293 13:06:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:21.293 13:06:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:21.293 13:06:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:21.293 13:06:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:21.293 13:06:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:21.293 13:06:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:21.293 13:06:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:21.293 13:06:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:21.293 13:06:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:21.293 13:06:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:21.293 13:06:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:21.293 13:06:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:21.293 13:06:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:21.293 13:06:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:21.293 13:06:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:21.293 13:06:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:21.293 13:06:17 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:21.293 13:06:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:21.293 13:06:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:21.293 13:06:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:21.293 13:06:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:21.293 13:06:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:21.293 13:06:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:21.293 13:06:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:21.293 13:06:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:21.293 13:06:17 
accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:21.293 13:06:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:21.293 13:06:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:21.293 13:06:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:21.293 13:06:17 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:21.293 13:06:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:21.293 13:06:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:21.293 13:06:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:21.293 13:06:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:21.293 13:06:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:21.293 13:06:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:21.293 13:06:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:21.293 13:06:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:21.293 13:06:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:21.293 13:06:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:21.293 13:06:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:21.293 13:06:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:21.293 13:06:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:21.293 13:06:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:21.293 13:06:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:21.293 13:06:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:21.293 13:06:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:21.293 13:06:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:21.293 13:06:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:21.293 13:06:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:21.293 13:06:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:21.293 13:06:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:21.293 13:06:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:21.293 13:06:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:21.293 13:06:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:21.293 13:06:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:21.293 13:06:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:21.293 13:06:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:21.293 13:06:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:21.293 13:06:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:21.293 13:06:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:21.293 13:06:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:21.293 13:06:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:21.293 13:06:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.351 13:06:19 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:22.351 13:06:19 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case 
"$var" in 00:07:22.351 13:06:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.351 13:06:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.351 13:06:19 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:22.351 13:06:19 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:22.351 13:06:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.351 13:06:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.351 13:06:19 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:22.351 13:06:19 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:22.351 13:06:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.351 13:06:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.351 13:06:19 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:22.351 13:06:19 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:22.351 13:06:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.351 13:06:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.351 13:06:19 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:22.351 13:06:19 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:22.351 13:06:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.351 13:06:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.351 13:06:19 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:22.351 13:06:19 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:22.351 13:06:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.351 13:06:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.352 13:06:19 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:22.352 13:06:19 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:22.352 13:06:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.352 13:06:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.352 13:06:19 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:22.352 13:06:19 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:22.352 13:06:19 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:22.352 00:07:22.352 real 0m1.459s 00:07:22.352 user 0m1.249s 00:07:22.352 sys 0m0.114s 00:07:22.352 13:06:19 accel.accel_decomp_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:22.352 13:06:19 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:22.352 ************************************ 00:07:22.352 END TEST accel_decomp_mthread 00:07:22.352 ************************************ 00:07:22.352 13:06:19 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:22.352 13:06:19 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:22.352 13:06:19 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:22.352 13:06:19 accel -- common/autotest_common.sh@10 -- # set +x 00:07:22.610 ************************************ 00:07:22.610 START TEST accel_decomp_full_mthread 00:07:22.610 ************************************ 00:07:22.610 13:06:19 
accel.accel_decomp_full_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:22.610 13:06:19 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:22.610 13:06:19 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:22.610 13:06:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.610 13:06:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.610 13:06:19 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:22.610 13:06:19 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:22.610 13:06:19 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:22.610 13:06:19 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:22.610 13:06:19 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:22.610 13:06:19 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.610 13:06:19 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.610 13:06:19 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:22.610 13:06:19 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:22.610 13:06:19 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:22.610 [2024-07-15 13:06:19.113551] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
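The accel_decomp_full_mthread case starting here differs from accel_decomp_mthread only by the extra -o 0. Judging by the sizes echoed in the trace ('4096 bytes' for the plain case, '111250 bytes' for the "full" variants), -o 0 appears to switch accel_perf from its default 4 KiB transfer size to the full size of the decompressed test file. A hedged comparison, reusing the $SPDK shorthand from the sketch above:

  # Same workload, two transfer sizes; the byte counts are the ones echoed by accel.sh.
  "$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -T 2        # 4096-byte blocks
  "$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -o 0 -T 2   # whole 111250-byte buffer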
00:07:22.610 [2024-07-15 13:06:19.113671] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76552 ] 00:07:22.610 [2024-07-15 13:06:19.256227] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.610 [2024-07-15 13:06:19.343768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.868 13:06:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:22.868 13:06:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:22.868 13:06:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.868 13:06:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.868 13:06:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:22.868 13:06:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:22.868 13:06:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.868 13:06:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.868 13:06:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:22.868 13:06:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:22.868 13:06:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.868 13:06:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.868 13:06:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:22.868 13:06:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:22.868 13:06:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.868 13:06:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.868 13:06:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:22.868 13:06:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:22.868 13:06:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.868 13:06:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.868 13:06:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:22.868 13:06:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:22.868 13:06:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.868 13:06:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.868 13:06:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:22.868 13:06:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:22.868 13:06:19 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:22.868 13:06:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.868 13:06:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.868 13:06:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:22.868 13:06:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:22.868 13:06:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.868 13:06:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.868 13:06:19 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:22.868 13:06:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:22.868 13:06:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.868 13:06:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.868 13:06:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:22.868 13:06:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:22.868 13:06:19 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:22.868 13:06:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.868 13:06:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.868 13:06:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:22.868 13:06:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:22.868 13:06:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.868 13:06:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.868 13:06:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:22.868 13:06:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:22.868 13:06:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.868 13:06:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.868 13:06:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:22.868 13:06:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:22.868 13:06:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.868 13:06:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.868 13:06:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:22.868 13:06:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:22.868 13:06:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.868 13:06:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.868 13:06:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:22.868 13:06:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:22.868 13:06:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.868 13:06:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.868 13:06:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:22.868 13:06:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:22.868 13:06:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.868 13:06:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.868 13:06:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:22.868 13:06:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:22.868 13:06:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.868 13:06:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.868 13:06:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:22.868 13:06:19 accel.accel_decomp_full_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:07:22.868 13:06:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.868 13:06:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.243 13:06:20 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:24.243 13:06:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.243 13:06:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.243 13:06:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.243 13:06:20 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:24.243 13:06:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.243 13:06:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.243 13:06:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.243 13:06:20 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:24.243 13:06:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.243 13:06:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.243 13:06:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.243 13:06:20 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:24.243 13:06:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.243 13:06:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.243 13:06:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.243 13:06:20 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:24.243 13:06:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.243 13:06:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.243 13:06:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.243 13:06:20 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:24.243 13:06:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.243 13:06:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.243 13:06:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.243 13:06:20 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:24.243 13:06:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.243 13:06:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.243 13:06:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.243 13:06:20 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:24.243 13:06:20 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:24.243 13:06:20 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:24.243 00:07:24.243 real 0m1.512s 00:07:24.243 user 0m1.296s 00:07:24.243 sys 0m0.120s 00:07:24.243 13:06:20 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:24.243 13:06:20 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:24.243 ************************************ 00:07:24.243 END TEST accel_decomp_full_mthread 00:07:24.243 ************************************ 00:07:24.243 13:06:20 accel -- 
accel/accel.sh@124 -- # [[ n == y ]] 00:07:24.243 13:06:20 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:24.243 13:06:20 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:24.243 13:06:20 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:24.243 13:06:20 accel -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:24.243 13:06:20 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:24.243 13:06:20 accel -- common/autotest_common.sh@10 -- # set +x 00:07:24.243 13:06:20 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:24.243 13:06:20 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:24.243 13:06:20 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:24.243 13:06:20 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:24.243 13:06:20 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:24.243 13:06:20 accel -- accel/accel.sh@41 -- # jq -r . 00:07:24.243 ************************************ 00:07:24.243 START TEST accel_dif_functional_tests 00:07:24.243 ************************************ 00:07:24.243 13:06:20 accel.accel_dif_functional_tests -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:24.244 [2024-07-15 13:06:20.712433] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:24.244 [2024-07-15 13:06:20.712534] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76587 ] 00:07:24.244 [2024-07-15 13:06:20.855275] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:24.244 [2024-07-15 13:06:20.937385] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:24.244 [2024-07-15 13:06:20.937526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:24.244 [2024-07-15 13:06:20.937536] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.502 00:07:24.502 00:07:24.502 CUnit - A unit testing framework for C - Version 2.1-3 00:07:24.502 http://cunit.sourceforge.net/ 00:07:24.502 00:07:24.502 00:07:24.502 Suite: accel_dif 00:07:24.502 Test: verify: DIF generated, GUARD check ...passed 00:07:24.502 Test: verify: DIF generated, APPTAG check ...passed 00:07:24.502 Test: verify: DIF generated, REFTAG check ...passed 00:07:24.502 Test: verify: DIF not generated, GUARD check ...passed 00:07:24.502 Test: verify: DIF not generated, APPTAG check ...[2024-07-15 13:06:21.037340] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:24.502 [2024-07-15 13:06:21.037482] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:24.502 passed 00:07:24.502 Test: verify: DIF not generated, REFTAG check ...passed 00:07:24.502 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:24.502 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-15 13:06:21.037618] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:24.502 passed 00:07:24.502 Test: verify: APPTAG incorrect, no APPTAG check ...[2024-07-15 13:06:21.037697] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:24.502 passed 00:07:24.502 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:24.502 Test: verify: 
REFTAG_INIT correct, REFTAG check ...passed 00:07:24.502 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-15 13:06:21.038087] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:24.502 passed 00:07:24.502 Test: verify copy: DIF generated, GUARD check ...passed 00:07:24.502 Test: verify copy: DIF generated, APPTAG check ...passed 00:07:24.502 Test: verify copy: DIF generated, REFTAG check ...passed 00:07:24.502 Test: verify copy: DIF not generated, GUARD check ...passed 00:07:24.502 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 13:06:21.038580] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:24.502 [2024-07-15 13:06:21.038680] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:24.502 passed 00:07:24.502 Test: verify copy: DIF not generated, REFTAG check ...passed 00:07:24.502 Test: generate copy: DIF generated, GUARD check ...[2024-07-15 13:06:21.038721] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:24.502 passed 00:07:24.502 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:24.502 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:24.502 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:24.502 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:24.502 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:24.502 Test: generate copy: iovecs-len validate ...passed 00:07:24.502 Test: generate copy: buffer alignment validate ...passed 00:07:24.502 00:07:24.502 Run Summary: Type Total Ran Passed Failed Inactive 00:07:24.502 suites 1 1 n/a 0 0 00:07:24.502 tests 26 26 26 0 0 00:07:24.502 asserts 115 115 115 0 n/a 00:07:24.502 00:07:24.502 Elapsed time = 0.007 seconds 00:07:24.502 [2024-07-15 13:06:21.039357] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:24.760 ************************************ 00:07:24.760 END TEST accel_dif_functional_tests 00:07:24.760 ************************************ 00:07:24.760 00:07:24.760 real 0m0.587s 00:07:24.760 user 0m0.786s 00:07:24.760 sys 0m0.167s 00:07:24.760 13:06:21 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:24.760 13:06:21 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:24.760 ************************************ 00:07:24.760 END TEST accel 00:07:24.760 ************************************ 00:07:24.760 00:07:24.760 real 0m34.410s 00:07:24.761 user 0m35.994s 00:07:24.761 sys 0m4.123s 00:07:24.761 13:06:21 accel -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:24.761 13:06:21 accel -- common/autotest_common.sh@10 -- # set +x 00:07:24.761 13:06:21 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:24.761 13:06:21 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:24.761 13:06:21 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:24.761 13:06:21 -- common/autotest_common.sh@10 -- # set +x 00:07:24.761 ************************************ 00:07:24.761 START TEST accel_rpc 00:07:24.761 ************************************ 00:07:24.761 13:06:21 accel_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:24.761 * Looking for test storage... 00:07:24.761 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:24.761 13:06:21 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:24.761 13:06:21 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=76657 00:07:24.761 13:06:21 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:24.761 13:06:21 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 76657 00:07:24.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:24.761 13:06:21 accel_rpc -- common/autotest_common.sh@827 -- # '[' -z 76657 ']' 00:07:24.761 13:06:21 accel_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:24.761 13:06:21 accel_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:24.761 13:06:21 accel_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:24.761 13:06:21 accel_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:24.761 13:06:21 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:24.761 [2024-07-15 13:06:21.455922] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
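The accel_rpc suite whose startup banner appears above drives a bare spdk_tgt (launched with --wait-for-rpc) purely over JSON-RPC. The assign-opcode test traced below boils down to roughly the following sequence; every call is taken from the rpc_cmd lines in the log, and the $RPC shorthand is only for brevity here:

  # spdk_tgt is still in --wait-for-rpc mode when the first two calls are made.
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC accel_assign_opc -o copy -m incorrect   # accepted before init; only a NOTICE is logged
  $RPC accel_assign_opc -o copy -m software    # re-assign the copy opcode to the software module
  $RPC framework_start_init                    # finish initialization
  $RPC accel_get_opc_assignments | jq -r .copy | grep software   # the test expects "software"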
00:07:24.761 [2024-07-15 13:06:21.456698] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76657 ] 00:07:25.019 [2024-07-15 13:06:21.591120] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.019 [2024-07-15 13:06:21.668279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.952 13:06:22 accel_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:25.952 13:06:22 accel_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:25.952 13:06:22 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:25.952 13:06:22 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:25.952 13:06:22 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:25.952 13:06:22 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:25.953 13:06:22 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:25.953 13:06:22 accel_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:25.953 13:06:22 accel_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:25.953 13:06:22 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:25.953 ************************************ 00:07:25.953 START TEST accel_assign_opcode 00:07:25.953 ************************************ 00:07:25.953 13:06:22 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1121 -- # accel_assign_opcode_test_suite 00:07:25.953 13:06:22 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:25.953 13:06:22 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.953 13:06:22 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:25.953 [2024-07-15 13:06:22.476953] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:25.953 13:06:22 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.953 13:06:22 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:25.953 13:06:22 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.953 13:06:22 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:25.953 [2024-07-15 13:06:22.484937] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:25.953 13:06:22 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.953 13:06:22 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:25.953 13:06:22 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.953 13:06:22 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:26.210 13:06:22 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.210 13:06:22 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:26.210 13:06:22 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:26.210 13:06:22 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.210 13:06:22 accel_rpc.accel_assign_opcode -- 
common/autotest_common.sh@10 -- # set +x 00:07:26.210 13:06:22 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:26.210 13:06:22 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.210 software 00:07:26.210 00:07:26.210 real 0m0.293s 00:07:26.210 user 0m0.051s 00:07:26.210 sys 0m0.010s 00:07:26.210 ************************************ 00:07:26.210 END TEST accel_assign_opcode 00:07:26.210 ************************************ 00:07:26.210 13:06:22 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:26.210 13:06:22 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:26.210 13:06:22 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 76657 00:07:26.210 13:06:22 accel_rpc -- common/autotest_common.sh@946 -- # '[' -z 76657 ']' 00:07:26.210 13:06:22 accel_rpc -- common/autotest_common.sh@950 -- # kill -0 76657 00:07:26.210 13:06:22 accel_rpc -- common/autotest_common.sh@951 -- # uname 00:07:26.210 13:06:22 accel_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:26.210 13:06:22 accel_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 76657 00:07:26.210 killing process with pid 76657 00:07:26.210 13:06:22 accel_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:26.210 13:06:22 accel_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:26.210 13:06:22 accel_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 76657' 00:07:26.210 13:06:22 accel_rpc -- common/autotest_common.sh@965 -- # kill 76657 00:07:26.210 13:06:22 accel_rpc -- common/autotest_common.sh@970 -- # wait 76657 00:07:26.776 ************************************ 00:07:26.776 END TEST accel_rpc 00:07:26.776 ************************************ 00:07:26.776 00:07:26.776 real 0m1.881s 00:07:26.776 user 0m2.003s 00:07:26.776 sys 0m0.441s 00:07:26.776 13:06:23 accel_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:26.776 13:06:23 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.776 13:06:23 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:26.776 13:06:23 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:26.776 13:06:23 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:26.776 13:06:23 -- common/autotest_common.sh@10 -- # set +x 00:07:26.776 ************************************ 00:07:26.776 START TEST app_cmdline 00:07:26.776 ************************************ 00:07:26.776 13:06:23 app_cmdline -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:26.776 * Looking for test storage... 00:07:26.776 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:26.776 13:06:23 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:26.776 13:06:23 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=76763 00:07:26.776 13:06:23 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:26.776 13:06:23 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 76763 00:07:26.776 13:06:23 app_cmdline -- common/autotest_common.sh@827 -- # '[' -z 76763 ']' 00:07:26.776 13:06:23 app_cmdline -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:26.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
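The spdk_tgt instance above is started with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods may be called. The checks that follow in the trace amount to roughly the rpc.py calls below; paths are taken from the log, the $RPC shorthand is only for brevity, and the expected failure code comes from the JSON-RPC error shown further down:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC spdk_get_version                          # returns the version/fields JSON shown in the trace
  $RPC rpc_get_methods | jq -r '.[]' | sort      # must list exactly the two allowed methods
  $RPC env_dpdk_get_mem_stats                    # must be rejected with Code=-32601 "Method not found"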
00:07:26.776 13:06:23 app_cmdline -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:26.776 13:06:23 app_cmdline -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:26.776 13:06:23 app_cmdline -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:26.776 13:06:23 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:26.776 [2024-07-15 13:06:23.387754] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:26.776 [2024-07-15 13:06:23.388315] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76763 ] 00:07:27.034 [2024-07-15 13:06:23.531240] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.034 [2024-07-15 13:06:23.631431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.968 13:06:24 app_cmdline -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:27.968 13:06:24 app_cmdline -- common/autotest_common.sh@860 -- # return 0 00:07:27.968 13:06:24 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:27.968 { 00:07:27.968 "fields": { 00:07:27.968 "commit": "5fa2f5086", 00:07:27.968 "major": 24, 00:07:27.968 "minor": 5, 00:07:27.968 "patch": 1, 00:07:27.968 "suffix": "-pre" 00:07:27.968 }, 00:07:27.968 "version": "SPDK v24.05.1-pre git sha1 5fa2f5086" 00:07:27.968 } 00:07:27.968 13:06:24 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:27.968 13:06:24 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:27.968 13:06:24 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:27.968 13:06:24 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:27.968 13:06:24 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:27.968 13:06:24 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.968 13:06:24 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:27.968 13:06:24 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:27.968 13:06:24 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:27.968 13:06:24 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:28.226 13:06:24 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:28.226 13:06:24 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:28.226 13:06:24 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:28.226 13:06:24 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:28.226 13:06:24 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:28.226 13:06:24 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:28.226 13:06:24 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:28.226 13:06:24 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:28.226 13:06:24 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:28.226 13:06:24 
app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:28.226 13:06:24 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:28.226 13:06:24 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:28.226 13:06:24 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:28.226 13:06:24 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:28.226 2024/07/15 13:06:24 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:07:28.226 request: 00:07:28.226 { 00:07:28.226 "method": "env_dpdk_get_mem_stats", 00:07:28.226 "params": {} 00:07:28.226 } 00:07:28.226 Got JSON-RPC error response 00:07:28.226 GoRPCClient: error on JSON-RPC call 00:07:28.484 13:06:24 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:07:28.484 13:06:24 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:28.484 13:06:24 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:28.484 13:06:24 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:28.484 13:06:24 app_cmdline -- app/cmdline.sh@1 -- # killprocess 76763 00:07:28.484 13:06:24 app_cmdline -- common/autotest_common.sh@946 -- # '[' -z 76763 ']' 00:07:28.484 13:06:24 app_cmdline -- common/autotest_common.sh@950 -- # kill -0 76763 00:07:28.484 13:06:24 app_cmdline -- common/autotest_common.sh@951 -- # uname 00:07:28.484 13:06:24 app_cmdline -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:28.484 13:06:24 app_cmdline -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 76763 00:07:28.484 13:06:24 app_cmdline -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:28.484 13:06:24 app_cmdline -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:28.484 13:06:24 app_cmdline -- common/autotest_common.sh@964 -- # echo 'killing process with pid 76763' 00:07:28.484 killing process with pid 76763 00:07:28.484 13:06:24 app_cmdline -- common/autotest_common.sh@965 -- # kill 76763 00:07:28.484 13:06:24 app_cmdline -- common/autotest_common.sh@970 -- # wait 76763 00:07:28.743 00:07:28.743 real 0m2.121s 00:07:28.743 user 0m2.645s 00:07:28.743 sys 0m0.502s 00:07:28.743 13:06:25 app_cmdline -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:28.743 13:06:25 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:28.743 ************************************ 00:07:28.743 END TEST app_cmdline 00:07:28.743 ************************************ 00:07:28.743 13:06:25 -- spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:28.743 13:06:25 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:28.743 13:06:25 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:28.743 13:06:25 -- common/autotest_common.sh@10 -- # set +x 00:07:28.743 ************************************ 00:07:28.743 START TEST version 00:07:28.743 ************************************ 00:07:28.743 13:06:25 version -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:29.002 * Looking for test storage... 
00:07:29.002 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:29.002 13:06:25 version -- app/version.sh@17 -- # get_header_version major 00:07:29.002 13:06:25 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:29.002 13:06:25 version -- app/version.sh@14 -- # cut -f2 00:07:29.002 13:06:25 version -- app/version.sh@14 -- # tr -d '"' 00:07:29.002 13:06:25 version -- app/version.sh@17 -- # major=24 00:07:29.002 13:06:25 version -- app/version.sh@18 -- # get_header_version minor 00:07:29.002 13:06:25 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:29.002 13:06:25 version -- app/version.sh@14 -- # cut -f2 00:07:29.002 13:06:25 version -- app/version.sh@14 -- # tr -d '"' 00:07:29.002 13:06:25 version -- app/version.sh@18 -- # minor=5 00:07:29.002 13:06:25 version -- app/version.sh@19 -- # get_header_version patch 00:07:29.002 13:06:25 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:29.002 13:06:25 version -- app/version.sh@14 -- # cut -f2 00:07:29.002 13:06:25 version -- app/version.sh@14 -- # tr -d '"' 00:07:29.002 13:06:25 version -- app/version.sh@19 -- # patch=1 00:07:29.002 13:06:25 version -- app/version.sh@20 -- # get_header_version suffix 00:07:29.002 13:06:25 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:29.002 13:06:25 version -- app/version.sh@14 -- # cut -f2 00:07:29.002 13:06:25 version -- app/version.sh@14 -- # tr -d '"' 00:07:29.002 13:06:25 version -- app/version.sh@20 -- # suffix=-pre 00:07:29.002 13:06:25 version -- app/version.sh@22 -- # version=24.5 00:07:29.002 13:06:25 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:29.002 13:06:25 version -- app/version.sh@25 -- # version=24.5.1 00:07:29.002 13:06:25 version -- app/version.sh@28 -- # version=24.5.1rc0 00:07:29.002 13:06:25 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:29.002 13:06:25 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:29.002 13:06:25 version -- app/version.sh@30 -- # py_version=24.5.1rc0 00:07:29.002 13:06:25 version -- app/version.sh@31 -- # [[ 24.5.1rc0 == \2\4\.\5\.\1\r\c\0 ]] 00:07:29.002 00:07:29.002 real 0m0.147s 00:07:29.002 user 0m0.085s 00:07:29.002 sys 0m0.089s 00:07:29.002 13:06:25 version -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:29.002 13:06:25 version -- common/autotest_common.sh@10 -- # set +x 00:07:29.002 ************************************ 00:07:29.002 END TEST version 00:07:29.002 ************************************ 00:07:29.002 13:06:25 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:07:29.002 13:06:25 -- spdk/autotest.sh@198 -- # uname -s 00:07:29.002 13:06:25 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:07:29.002 13:06:25 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:29.002 13:06:25 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:29.002 13:06:25 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:07:29.002 13:06:25 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:29.002 13:06:25 -- spdk/autotest.sh@260 -- # timing_exit 
lib 00:07:29.002 13:06:25 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:29.002 13:06:25 -- common/autotest_common.sh@10 -- # set +x 00:07:29.002 13:06:25 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:29.002 13:06:25 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:29.002 13:06:25 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:07:29.002 13:06:25 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:07:29.002 13:06:25 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:07:29.002 13:06:25 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:07:29.002 13:06:25 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:29.002 13:06:25 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:29.002 13:06:25 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:29.002 13:06:25 -- common/autotest_common.sh@10 -- # set +x 00:07:29.002 ************************************ 00:07:29.002 START TEST nvmf_tcp 00:07:29.002 ************************************ 00:07:29.002 13:06:25 nvmf_tcp -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:29.002 * Looking for test storage... 00:07:29.002 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:29.262 13:06:25 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:29.262 13:06:25 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:29.262 13:06:25 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:29.262 13:06:25 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:07:29.262 13:06:25 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:29.262 13:06:25 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:29.262 13:06:25 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:29.262 13:06:25 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:29.262 13:06:25 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:29.262 13:06:25 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:29.262 13:06:25 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:29.262 13:06:25 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:29.262 13:06:25 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:29.262 13:06:25 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:29.262 13:06:25 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:07:29.262 13:06:25 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=c8b8b44b-387e-43b9-a950-dc0d98528a02 00:07:29.262 13:06:25 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:29.262 13:06:25 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:29.262 13:06:25 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:29.262 13:06:25 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:29.262 13:06:25 nvmf_tcp -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:29.262 13:06:25 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:29.262 13:06:25 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:29.262 13:06:25 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:29.262 13:06:25 nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.262 13:06:25 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.262 13:06:25 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.262 13:06:25 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:07:29.262 13:06:25 nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.262 13:06:25 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:07:29.262 13:06:25 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:29.262 13:06:25 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:29.262 13:06:25 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:29.262 13:06:25 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:29.262 13:06:25 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:29.262 13:06:25 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:29.262 13:06:25 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:29.262 13:06:25 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:29.262 13:06:25 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:29.262 13:06:25 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:29.262 13:06:25 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:29.262 13:06:25 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:29.262 13:06:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:29.262 13:06:25 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:29.262 13:06:25 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:29.262 13:06:25 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:29.262 13:06:25 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:29.262 13:06:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:29.262 ************************************ 00:07:29.262 START TEST nvmf_example 00:07:29.262 ************************************ 00:07:29.262 13:06:25 
nvmf_tcp.nvmf_example -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:29.262 * Looking for test storage... 00:07:29.262 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:29.262 13:06:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:29.262 13:06:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:07:29.262 13:06:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:29.262 13:06:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:29.262 13:06:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:29.262 13:06:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:29.262 13:06:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:29.262 13:06:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:29.262 13:06:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:29.262 13:06:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:29.262 13:06:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:29.262 13:06:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:29.262 13:06:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:07:29.262 13:06:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=c8b8b44b-387e-43b9-a950-dc0d98528a02 00:07:29.262 13:06:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:29.262 13:06:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:29.262 13:06:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:29.262 13:06:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:29.262 13:06:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:29.262 13:06:25 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:29.262 13:06:25 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:29.262 13:06:25 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:29.262 13:06:25 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.262 13:06:25 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.262 13:06:25 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.263 13:06:25 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:07:29.263 13:06:25 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.263 13:06:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:07:29.263 13:06:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:29.263 13:06:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:29.263 13:06:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:29.263 13:06:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:29.263 13:06:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:29.263 13:06:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:29.263 13:06:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:29.263 13:06:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:29.263 13:06:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:29.263 13:06:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:29.263 13:06:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:29.263 13:06:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:29.263 13:06:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:29.263 13:06:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:29.263 13:06:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:29.263 13:06:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:29.263 13:06:25 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@720 -- # xtrace_disable 00:07:29.263 13:06:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:29.263 13:06:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:29.263 13:06:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:29.263 13:06:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:29.263 13:06:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:29.263 13:06:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:29.263 13:06:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:29.263 13:06:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:29.263 13:06:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:29.263 13:06:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:29.263 13:06:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:29.263 13:06:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:29.263 13:06:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:29.263 13:06:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:29.263 13:06:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:29.263 13:06:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:29.263 13:06:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:29.263 13:06:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:29.263 13:06:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:29.263 13:06:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:29.263 13:06:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:29.263 13:06:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:29.263 13:06:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:29.263 13:06:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:29.263 13:06:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:29.263 13:06:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:29.263 13:06:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:29.263 13:06:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:29.263 13:06:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:29.263 Cannot find device "nvmf_init_br" 00:07:29.263 13:06:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@154 -- # true 00:07:29.263 13:06:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:29.263 Cannot find device "nvmf_tgt_br" 00:07:29.263 13:06:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@155 -- # true 00:07:29.263 13:06:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:29.263 Cannot find device "nvmf_tgt_br2" 00:07:29.263 13:06:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@156 -- # true 00:07:29.263 13:06:25 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:29.263 Cannot find device "nvmf_init_br" 00:07:29.263 13:06:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@157 -- # true 00:07:29.263 13:06:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:29.263 Cannot find device "nvmf_tgt_br" 00:07:29.263 13:06:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@158 -- # true 00:07:29.263 13:06:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:29.263 Cannot find device "nvmf_tgt_br2" 00:07:29.263 13:06:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@159 -- # true 00:07:29.263 13:06:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:29.263 Cannot find device "nvmf_br" 00:07:29.263 13:06:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@160 -- # true 00:07:29.263 13:06:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:29.263 Cannot find device "nvmf_init_if" 00:07:29.263 13:06:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@161 -- # true 00:07:29.263 13:06:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:29.263 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:29.263 13:06:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@162 -- # true 00:07:29.263 13:06:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:29.263 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:29.263 13:06:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@163 -- # true 00:07:29.263 13:06:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:29.263 13:06:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:29.522 13:06:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:29.523 13:06:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:29.523 13:06:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:29.523 13:06:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:29.523 13:06:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:29.523 13:06:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:29.523 13:06:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:29.523 13:06:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:29.523 13:06:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:29.523 13:06:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:29.523 13:06:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:29.523 13:06:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:29.523 13:06:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:29.523 13:06:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 
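At this point nvmf_veth_init has created the target namespace, the three veth pairs and their addressing; the bridge, the iptables rules and the connectivity pings follow below. Condensed into plain ip(8) commands, with the interface names and 10.0.0.x addresses taken from the trace, the setup so far is roughly:

```bash
# Namespace and veth topology for the TCP tests (sketch condensed from the trace above)
ip netns add nvmf_tgt_ns_spdk

# One veth pair for the initiator side, two for the target side
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# Move the target ends into the namespace and assign the test subnet
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# Bring everything up (the bridge-side peers are enslaved to nvmf_br just below)
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
```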
00:07:29.523 13:06:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:29.523 13:06:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:29.523 13:06:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:29.523 13:06:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:29.523 13:06:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:29.523 13:06:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:29.782 13:06:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:29.782 13:06:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:29.782 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:29.782 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.158 ms 00:07:29.782 00:07:29.782 --- 10.0.0.2 ping statistics --- 00:07:29.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:29.782 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:07:29.782 13:06:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:29.782 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:29.782 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.085 ms 00:07:29.782 00:07:29.782 --- 10.0.0.3 ping statistics --- 00:07:29.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:29.782 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:07:29.782 13:06:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:29.782 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:29.782 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:07:29.782 00:07:29.782 --- 10.0.0.1 ping statistics --- 00:07:29.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:29.782 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:07:29.782 13:06:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:29.782 13:06:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@433 -- # return 0 00:07:29.782 13:06:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:29.782 13:06:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:29.782 13:06:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:29.782 13:06:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:29.782 13:06:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:29.782 13:06:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:29.782 13:06:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:29.782 13:06:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:29.782 13:06:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:29.782 13:06:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:29.782 13:06:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:29.782 13:06:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:29.782 13:06:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:29.782 13:06:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=77114 00:07:29.782 13:06:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:29.782 13:06:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:29.782 13:06:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 77114 00:07:29.782 13:06:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@827 -- # '[' -z 77114 ']' 00:07:29.782 13:06:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:29.782 13:06:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:29.782 13:06:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:29.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
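With 10.0.0.1, 10.0.0.2 and 10.0.0.3 reachable across nvmf_br, the harness loads nvme-tcp and launches the example target inside the namespace, then blocks in waitforlisten until the application's JSON-RPC socket is usable. A minimal stand-alone version of that launch-and-wait step, assuming the default /var/tmp/spdk.sock path that appears in the trace, might look like:

```bash
#!/usr/bin/env bash
set -euo pipefail

# Start the example nvmf target inside the test namespace (binary path and flags as traced above)
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF &
nvmfpid=$!

# Hand-rolled stand-in for waitforlisten: poll until the RPC UNIX socket shows up,
# bailing out early if the target process dies first.
for _ in $(seq 1 100); do
    [[ -S /var/tmp/spdk.sock ]] && break
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf target exited before listening" >&2; exit 1; }
    sleep 0.1
done
```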
00:07:29.782 13:06:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:29.782 13:06:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:30.718 13:06:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:30.719 13:06:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@860 -- # return 0 00:07:30.719 13:06:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:30.719 13:06:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:30.719 13:06:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:30.719 13:06:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:30.719 13:06:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.719 13:06:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:30.719 13:06:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.719 13:06:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:30.719 13:06:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.719 13:06:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:30.978 13:06:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.978 13:06:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:30.978 13:06:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:30.978 13:06:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.978 13:06:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:30.978 13:06:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.978 13:06:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:30.978 13:06:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:30.978 13:06:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.978 13:06:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:30.978 13:06:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.978 13:06:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:30.978 13:06:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.978 13:06:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:30.978 13:06:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.978 13:06:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:07:30.978 13:06:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:43.177 Initializing NVMe Controllers 00:07:43.177 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:43.177 Associating TCP 
(addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:43.177 Initialization complete. Launching workers. 00:07:43.177 ======================================================== 00:07:43.177 Latency(us) 00:07:43.177 Device Information : IOPS MiB/s Average min max 00:07:43.177 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14797.90 57.80 4324.72 818.29 23121.17 00:07:43.177 ======================================================== 00:07:43.177 Total : 14797.90 57.80 4324.72 818.29 23121.17 00:07:43.177 00:07:43.177 13:06:37 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:43.177 13:06:37 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:43.177 13:06:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:43.177 13:06:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:07:43.177 13:06:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:43.177 13:06:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:07:43.177 13:06:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:43.177 13:06:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:43.177 rmmod nvme_tcp 00:07:43.177 rmmod nvme_fabrics 00:07:43.177 rmmod nvme_keyring 00:07:43.177 13:06:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:43.177 13:06:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:07:43.177 13:06:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:07:43.177 13:06:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 77114 ']' 00:07:43.177 13:06:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 77114 00:07:43.177 13:06:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@946 -- # '[' -z 77114 ']' 00:07:43.177 13:06:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@950 -- # kill -0 77114 00:07:43.177 13:06:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # uname 00:07:43.177 13:06:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:43.177 13:06:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 77114 00:07:43.177 13:06:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # process_name=nvmf 00:07:43.177 13:06:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@956 -- # '[' nvmf = sudo ']' 00:07:43.177 13:06:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@964 -- # echo 'killing process with pid 77114' 00:07:43.177 killing process with pid 77114 00:07:43.177 13:06:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@965 -- # kill 77114 00:07:43.177 13:06:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@970 -- # wait 77114 00:07:43.177 nvmf threads initialize successfully 00:07:43.177 bdev subsystem init successfully 00:07:43.177 created a nvmf target service 00:07:43.177 create targets's poll groups done 00:07:43.177 all subsystems of target started 00:07:43.177 nvmf target is running 00:07:43.177 all subsystems of target stopped 00:07:43.177 destroy targets's poll groups done 00:07:43.177 destroyed the nvmf target service 00:07:43.177 bdev subsystem finish successfully 00:07:43.177 nvmf threads destroy successfully 00:07:43.177 13:06:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:43.177 13:06:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:43.177 13:06:38 
nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:43.177 13:06:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:43.177 13:06:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:43.177 13:06:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:43.177 13:06:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:43.177 13:06:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:43.177 13:06:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:43.177 13:06:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:43.177 13:06:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:43.177 13:06:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:43.177 00:07:43.177 real 0m12.383s 00:07:43.177 user 0m44.408s 00:07:43.177 sys 0m2.082s 00:07:43.177 13:06:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:43.177 13:06:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:43.177 ************************************ 00:07:43.177 END TEST nvmf_example 00:07:43.177 ************************************ 00:07:43.177 13:06:38 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:43.177 13:06:38 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:43.177 13:06:38 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:43.177 13:06:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:43.177 ************************************ 00:07:43.177 START TEST nvmf_filesystem 00:07:43.177 ************************************ 00:07:43.177 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:43.177 * Looking for test storage... 
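For reference, the nvmf_example run that just completed provisions its target entirely over JSON-RPC before pointing spdk_nvme_perf at it. Condensed from the rpc_cmd calls traced above, and assuming rpc_cmd resolves to scripts/rpc.py against the default socket as it does in the SPDK test harness, the sequence is roughly:

```bash
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed target of the rpc_cmd wrapper

$rpc nvmf_create_transport -t tcp -o -u 8192      # TCP transport, options exactly as traced
$rpc bdev_malloc_create 64 512                    # 64 MB bdev, 512 B blocks; the trace shows the name Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Exercise it from the initiator side exactly as the test does
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
```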
00:07:43.177 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:43.177 13:06:38 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:07:43.177 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@38 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@43 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:43.178 13:06:38 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=y 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=y 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@53 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # 
VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:07:43.178 13:06:38 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:43.178 #define SPDK_CONFIG_H 00:07:43.178 #define SPDK_CONFIG_APPS 1 00:07:43.178 #define SPDK_CONFIG_ARCH native 00:07:43.178 #undef SPDK_CONFIG_ASAN 00:07:43.178 #define SPDK_CONFIG_AVAHI 1 00:07:43.178 #undef SPDK_CONFIG_CET 00:07:43.178 #define SPDK_CONFIG_COVERAGE 1 00:07:43.178 #define SPDK_CONFIG_CROSS_PREFIX 00:07:43.178 #undef SPDK_CONFIG_CRYPTO 00:07:43.178 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:43.178 #undef SPDK_CONFIG_CUSTOMOCF 00:07:43.178 #undef SPDK_CONFIG_DAOS 00:07:43.178 #define SPDK_CONFIG_DAOS_DIR 00:07:43.178 #define SPDK_CONFIG_DEBUG 1 00:07:43.178 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:43.178 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:07:43.178 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:07:43.178 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:07:43.178 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:43.178 #undef SPDK_CONFIG_DPDK_UADK 00:07:43.178 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:07:43.178 #define SPDK_CONFIG_EXAMPLES 1 00:07:43.178 #undef SPDK_CONFIG_FC 00:07:43.178 #define SPDK_CONFIG_FC_PATH 00:07:43.178 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:43.178 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:43.178 #undef SPDK_CONFIG_FUSE 00:07:43.178 #undef SPDK_CONFIG_FUZZER 00:07:43.178 #define SPDK_CONFIG_FUZZER_LIB 00:07:43.178 #define SPDK_CONFIG_GOLANG 1 00:07:43.178 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:43.178 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:43.178 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:43.178 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:43.178 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:43.178 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:43.178 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:43.178 #define SPDK_CONFIG_IDXD 1 00:07:43.178 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:43.178 #undef SPDK_CONFIG_IPSEC_MB 00:07:43.179 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:43.179 #define SPDK_CONFIG_ISAL 1 00:07:43.179 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:43.179 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:43.179 #define SPDK_CONFIG_LIBDIR 00:07:43.179 #undef SPDK_CONFIG_LTO 00:07:43.179 #define SPDK_CONFIG_MAX_LCORES 00:07:43.179 #define SPDK_CONFIG_NVME_CUSE 1 00:07:43.179 #undef SPDK_CONFIG_OCF 00:07:43.179 #define SPDK_CONFIG_OCF_PATH 00:07:43.179 #define SPDK_CONFIG_OPENSSL_PATH 00:07:43.179 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:43.179 #define SPDK_CONFIG_PGO_DIR 00:07:43.179 #undef SPDK_CONFIG_PGO_USE 00:07:43.179 #define SPDK_CONFIG_PREFIX /usr/local 00:07:43.179 #undef SPDK_CONFIG_RAID5F 00:07:43.179 #undef SPDK_CONFIG_RBD 00:07:43.179 #define SPDK_CONFIG_RDMA 1 00:07:43.179 
#define SPDK_CONFIG_RDMA_PROV verbs 00:07:43.179 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:43.179 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:43.179 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:43.179 #define SPDK_CONFIG_SHARED 1 00:07:43.179 #undef SPDK_CONFIG_SMA 00:07:43.179 #define SPDK_CONFIG_TESTS 1 00:07:43.179 #undef SPDK_CONFIG_TSAN 00:07:43.179 #define SPDK_CONFIG_UBLK 1 00:07:43.179 #define SPDK_CONFIG_UBSAN 1 00:07:43.179 #undef SPDK_CONFIG_UNIT_TESTS 00:07:43.179 #undef SPDK_CONFIG_URING 00:07:43.179 #define SPDK_CONFIG_URING_PATH 00:07:43.179 #undef SPDK_CONFIG_URING_ZNS 00:07:43.179 #define SPDK_CONFIG_USDT 1 00:07:43.179 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:43.179 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:43.179 #undef SPDK_CONFIG_VFIO_USER 00:07:43.179 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:43.179 #define SPDK_CONFIG_VHOST 1 00:07:43.179 #define SPDK_CONFIG_VIRTIO 1 00:07:43.179 #undef SPDK_CONFIG_VTUNE 00:07:43.179 #define SPDK_CONFIG_VTUNE_DIR 00:07:43.179 #define SPDK_CONFIG_WERROR 1 00:07:43.179 #define SPDK_CONFIG_WPDK_DIR 00:07:43.179 #undef SPDK_CONFIG_XNVME 00:07:43.179 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:43.179 13:06:38 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:43.179 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:43.179 13:06:38 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:43.179 13:06:38 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:43.179 13:06:38 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:43.179 13:06:38 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.179 13:06:38 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.179 13:06:38 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.179 13:06:38 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:43.179 13:06:38 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.179 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:07:43.179 13:06:38 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:07:43.179 13:06:38 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:07:43.179 13:06:38 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:07:43.179 13:06:38 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:07:43.179 13:06:38 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:07:43.179 13:06:38 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:07:43.179 13:06:38 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:07:43.179 13:06:38 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:07:43.179 13:06:38 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:07:43.179 13:06:38 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:07:43.179 13:06:38 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:43.179 13:06:38 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:43.179 13:06:38 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:43.179 13:06:38 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:43.179 13:06:38 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:43.179 13:06:38 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:43.179 13:06:38 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:07:43.179 13:06:38 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:43.179 13:06:38 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:43.179 13:06:38 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:43.179 13:06:38 nvmf_tcp.nvmf_filesystem -- 
pm/common@81 -- # [[ Linux == Linux ]] 00:07:43.179 13:06:38 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:07:43.179 13:06:38 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:07:43.179 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@57 -- # : 1 00:07:43.179 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:07:43.179 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@61 -- # : 0 00:07:43.179 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:43.179 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # : 0 00:07:43.179 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:07:43.179 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # : 1 00:07:43.179 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:43.179 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # : 0 00:07:43.179 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:07:43.179 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # : 00:07:43.179 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:07:43.179 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # : 0 00:07:43.179 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:07:43.179 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # : 0 00:07:43.179 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:07:43.179 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # : 0 00:07:43.179 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:07:43.179 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # : 0 00:07:43.179 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:43.179 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # : 0 00:07:43.179 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:07:43.179 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # : 0 00:07:43.179 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:07:43.179 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # : 0 00:07:43.179 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:07:43.179 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # : 0 00:07:43.179 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:07:43.179 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # : 0 00:07:43.179 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:07:43.179 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # : 0 00:07:43.179 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:07:43.179 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # : 1 00:07:43.179 13:06:38 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:07:43.179 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # : 0 00:07:43.179 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:07:43.179 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # : 0 00:07:43.179 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:43.179 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # : 0 00:07:43.179 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:07:43.179 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # : 0 00:07:43.179 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:07:43.179 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # : tcp 00:07:43.179 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:43.179 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # : 0 00:07:43.179 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:07:43.179 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # : 0 00:07:43.179 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:07:43.179 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # : 0 00:07:43.179 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:07:43.179 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # : 0 00:07:43.179 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:07:43.179 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # : 0 00:07:43.179 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:07:43.179 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # : 0 00:07:43.179 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:07:43.179 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # : 0 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # : 0 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # : 0 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # : 1 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # : /home/vagrant/spdk_repo/dpdk/build 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # : 0 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:07:43.180 13:06:38 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # : 0 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # : 0 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # : 0 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # : 0 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # : 0 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # : v22.11.4 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # : true 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # : 0 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # : 0 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # : 1 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # : 0 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # : 0 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # : 0 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # : 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # : 0 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # : 0 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # : 0 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # : 0 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 
00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # : 0 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # : 1 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # : 1 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:43.180 
13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # cat 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@235 -- # echo leak:libfuse3.so 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@241 -- # '[' -z /var/spdk/dependencies ']' 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEPENDENCY_DIR 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:07:43.180 13:06:38 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@261 -- # '[' 0 -eq 0 ']' 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # export valgrind= 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # valgrind= 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # uname -s 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # '[' Linux = Linux ']' 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # HUGEMEM=4096 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # export CLEAR_HUGE=yes 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # CLEAR_HUGE=yes 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@278 -- # MAKE=make 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKEFLAGS=-j10 00:07:43.180 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # export HUGEMEM=4096 00:07:43.181 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # HUGEMEM=4096 00:07:43.181 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@297 -- # NO_HUGE=() 00:07:43.181 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # TEST_MODE= 00:07:43.181 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # for i in "$@" 00:07:43.181 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # case "$i" in 00:07:43.181 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@305 -- # TEST_TRANSPORT=tcp 00:07:43.181 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # [[ -z 77362 ]] 00:07:43.181 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # kill -0 77362 00:07:43.181 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:07:43.181 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # [[ -v testdir ]] 00:07:43.181 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@329 -- # local 
requested_size=2147483648 00:07:43.181 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local mount target_dir 00:07:43.181 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@332 -- # local -A mounts fss sizes avails uses 00:07:43.181 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local source fs size avail mount use 00:07:43.181 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@335 -- # local storage_fallback storage_candidates 00:07:43.181 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # mktemp -udt spdk.XXXXXX 00:07:43.181 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # storage_fallback=/tmp/spdk.pJ87XV 00:07:43.181 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@342 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:43.181 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@344 -- # [[ -n '' ]] 00:07:43.181 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@349 -- # [[ -n '' ]] 00:07:43.181 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@354 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.pJ87XV/tests/target /tmp/spdk.pJ87XV 00:07:43.181 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@357 -- # requested_size=2214592512 00:07:43.181 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:43.181 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # df -T 00:07:43.181 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # grep -v Filesystem 00:07:43.181 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=devtmpfs 00:07:43.181 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=devtmpfs 00:07:43.181 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=4194304 00:07:43.181 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=4194304 00:07:43.181 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=0 00:07:43.181 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:43.181 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:43.181 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:43.181 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=6264516608 00:07:43.181 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=6267891712 00:07:43.181 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=3375104 00:07:43.181 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:43.181 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:43.181 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:43.181 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=2494353408 00:07:43.181 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=2507157504 00:07:43.181 13:06:38 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@362 -- # uses["$mount"]=12804096 00:07:43.181 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:43.181 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/vda5 00:07:43.181 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=btrfs 00:07:43.181 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=13196718080 00:07:43.181 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=20314062848 00:07:43.181 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=5848240128 00:07:43.181 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:43.181 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/vda5 00:07:43.181 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=btrfs 00:07:43.181 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=13196718080 00:07:43.181 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=20314062848 00:07:43.181 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=5848240128 00:07:43.181 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:43.181 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/vda2 00:07:43.181 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=ext4 00:07:43.181 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=843546624 00:07:43.181 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=1012768768 00:07:43.181 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=100016128 00:07:43.181 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:43.181 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:43.181 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:43.181 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=6267752448 00:07:43.181 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=6267891712 00:07:43.181 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=139264 00:07:43.181 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:43.181 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/vda3 00:07:43.181 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=vfat 00:07:43.181 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=92499968 00:07:43.181 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=104607744 00:07:43.181 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=12107776 00:07:43.181 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use 
avail _ mount 00:07:43.181 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:43.181 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:43.181 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=1253572608 00:07:43.181 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=1253576704 00:07:43.181 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4096 00:07:43.181 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:43.181 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt/output 00:07:43.181 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=fuse.sshfs 00:07:43.181 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=91679555584 00:07:43.181 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=105088212992 00:07:43.181 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=8023224320 00:07:43.181 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:43.181 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@365 -- # printf '* Looking for test storage...\n' 00:07:43.182 * Looking for test storage... 00:07:43.182 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@367 -- # local target_space new_size 00:07:43.182 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # for target_dir in "${storage_candidates[@]}" 00:07:43.182 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:43.182 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:43.182 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # mount=/home 00:07:43.182 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@373 -- # target_space=13196718080 00:07:43.182 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # (( target_space == 0 || target_space < requested_size )) 00:07:43.182 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space >= requested_size )) 00:07:43.182 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ btrfs == tmpfs ]] 00:07:43.182 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ btrfs == ramfs ]] 00:07:43.182 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ /home == / ]] 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:43.183 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # return 0 
00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=c8b8b44b-387e-43b9-a950-dc0d98528a02 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:43.183 13:06:38 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:43.183 Cannot find device 
"nvmf_tgt_br" 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@155 -- # true 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:43.183 Cannot find device "nvmf_tgt_br2" 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@156 -- # true 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:43.183 Cannot find device "nvmf_tgt_br" 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@158 -- # true 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:43.183 Cannot find device "nvmf_tgt_br2" 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@159 -- # true 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:43.183 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@162 -- # true 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:43.183 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@163 -- # true 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:43.183 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:43.184 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:43.184 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:43.184 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:43.184 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:43.184 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:43.184 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:43.184 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:43.184 13:06:38 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:43.184 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:43.184 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:43.184 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:43.184 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:43.184 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:43.184 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:43.184 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:43.184 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:43.184 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:43.184 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:07:43.184 00:07:43.184 --- 10.0.0.2 ping statistics --- 00:07:43.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:43.184 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:07:43.184 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:43.184 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:43.184 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:07:43.184 00:07:43.184 --- 10.0.0.3 ping statistics --- 00:07:43.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:43.184 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:07:43.184 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:43.184 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:43.184 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:07:43.184 00:07:43.184 --- 10.0.0.1 ping statistics --- 00:07:43.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:43.184 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:07:43.184 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:43.184 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@433 -- # return 0 00:07:43.184 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:43.184 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:43.184 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:43.184 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:43.184 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:43.184 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:43.184 13:06:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:43.184 13:06:38 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:43.184 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:43.184 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:43.184 13:06:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:43.184 ************************************ 00:07:43.184 START TEST nvmf_filesystem_no_in_capsule 00:07:43.184 ************************************ 00:07:43.184 13:06:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 0 00:07:43.184 13:06:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:07:43.184 13:06:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:43.184 13:06:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:43.184 13:06:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:43.184 13:06:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:43.184 13:06:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=77521 00:07:43.184 13:06:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:43.184 13:06:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 77521 00:07:43.184 13:06:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 77521 ']' 00:07:43.184 13:06:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:43.184 13:06:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:43.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:43.184 13:06:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:43.184 13:06:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:43.184 13:06:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:43.184 [2024-07-15 13:06:38.862166] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:43.184 [2024-07-15 13:06:38.862272] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:43.184 [2024-07-15 13:06:39.019278] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:43.184 [2024-07-15 13:06:39.134723] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:43.184 [2024-07-15 13:06:39.134787] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:43.184 [2024-07-15 13:06:39.134802] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:43.184 [2024-07-15 13:06:39.134814] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:43.184 [2024-07-15 13:06:39.134823] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:43.184 [2024-07-15 13:06:39.134985] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:43.184 [2024-07-15 13:06:39.135134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:43.184 [2024-07-15 13:06:39.135692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:43.184 [2024-07-15 13:06:39.135722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.184 13:06:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:43.184 13:06:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:07:43.442 13:06:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:43.442 13:06:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:43.442 13:06:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:43.442 13:06:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:43.442 13:06:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:43.442 13:06:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:43.442 13:06:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.442 13:06:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:43.442 [2024-07-15 13:06:39.957765] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:43.442 13:06:39 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.442 13:06:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:43.442 13:06:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.442 13:06:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:43.442 Malloc1 00:07:43.442 13:06:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.442 13:06:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:43.442 13:06:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.442 13:06:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:43.442 13:06:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.442 13:06:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:43.442 13:06:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.442 13:06:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:43.442 13:06:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.442 13:06:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:43.442 13:06:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.442 13:06:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:43.442 [2024-07-15 13:06:40.163084] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:43.442 13:06:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.443 13:06:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:43.443 13:06:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:07:43.443 13:06:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:07:43.443 13:06:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:07:43.443 13:06:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:07:43.443 13:06:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:43.443 13:06:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.443 13:06:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 
00:07:43.700 13:06:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.700 13:06:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:07:43.700 { 00:07:43.700 "aliases": [ 00:07:43.700 "0115d7ba-b247-4244-b8b3-b5b849e535ae" 00:07:43.700 ], 00:07:43.700 "assigned_rate_limits": { 00:07:43.700 "r_mbytes_per_sec": 0, 00:07:43.700 "rw_ios_per_sec": 0, 00:07:43.700 "rw_mbytes_per_sec": 0, 00:07:43.700 "w_mbytes_per_sec": 0 00:07:43.700 }, 00:07:43.700 "block_size": 512, 00:07:43.700 "claim_type": "exclusive_write", 00:07:43.700 "claimed": true, 00:07:43.700 "driver_specific": {}, 00:07:43.700 "memory_domains": [ 00:07:43.700 { 00:07:43.700 "dma_device_id": "system", 00:07:43.700 "dma_device_type": 1 00:07:43.700 }, 00:07:43.700 { 00:07:43.700 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:43.700 "dma_device_type": 2 00:07:43.700 } 00:07:43.700 ], 00:07:43.700 "name": "Malloc1", 00:07:43.700 "num_blocks": 1048576, 00:07:43.700 "product_name": "Malloc disk", 00:07:43.700 "supported_io_types": { 00:07:43.700 "abort": true, 00:07:43.700 "compare": false, 00:07:43.700 "compare_and_write": false, 00:07:43.700 "flush": true, 00:07:43.700 "nvme_admin": false, 00:07:43.700 "nvme_io": false, 00:07:43.700 "read": true, 00:07:43.700 "reset": true, 00:07:43.700 "unmap": true, 00:07:43.700 "write": true, 00:07:43.700 "write_zeroes": true 00:07:43.700 }, 00:07:43.700 "uuid": "0115d7ba-b247-4244-b8b3-b5b849e535ae", 00:07:43.700 "zoned": false 00:07:43.700 } 00:07:43.700 ]' 00:07:43.700 13:06:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:07:43.700 13:06:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:07:43.700 13:06:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:07:43.700 13:06:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:07:43.700 13:06:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:07:43.700 13:06:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:07:43.700 13:06:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:43.700 13:06:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid=c8b8b44b-387e-43b9-a950-dc0d98528a02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:43.958 13:06:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:43.958 13:06:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:07:43.958 13:06:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:07:43.958 13:06:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:07:43.958 13:06:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:07:45.857 13:06:42 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:07:45.857 13:06:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:07:45.857 13:06:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:07:45.857 13:06:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:07:45.857 13:06:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:07:45.857 13:06:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:07:45.857 13:06:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:45.857 13:06:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:45.857 13:06:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:45.857 13:06:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:45.857 13:06:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:45.857 13:06:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:45.857 13:06:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:45.857 13:06:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:45.857 13:06:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:45.857 13:06:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:45.857 13:06:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:45.857 13:06:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:46.115 13:06:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:47.052 13:06:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:47.052 13:06:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:47.052 13:06:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:47.052 13:06:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:47.052 13:06:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:47.052 ************************************ 00:07:47.052 START TEST filesystem_ext4 00:07:47.052 ************************************ 00:07:47.052 13:06:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:47.052 13:06:43 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:47.052 13:06:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:47.052 13:06:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:47.052 13:06:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:07:47.052 13:06:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:47.052 13:06:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:07:47.052 13:06:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local force 00:07:47.052 13:06:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:07:47.052 13:06:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:07:47.052 13:06:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:47.052 mke2fs 1.46.5 (30-Dec-2021) 00:07:47.052 Discarding device blocks: 0/522240 done 00:07:47.052 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:47.052 Filesystem UUID: 866b04c8-9552-472a-a8af-e7eb9fc1098e 00:07:47.052 Superblock backups stored on blocks: 00:07:47.052 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:47.052 00:07:47.052 Allocating group tables: 0/64 done 00:07:47.052 Writing inode tables: 0/64 done 00:07:47.310 Creating journal (8192 blocks): done 00:07:47.310 Writing superblocks and filesystem accounting information: 0/64 done 00:07:47.310 00:07:47.310 13:06:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # return 0 00:07:47.310 13:06:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:47.310 13:06:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:47.310 13:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:07:47.568 13:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:47.568 13:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:07:47.568 13:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:47.568 13:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:47.568 13:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 77521 00:07:47.568 13:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:47.568 13:06:44 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:47.568 13:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:47.568 13:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:47.568 00:07:47.568 real 0m0.462s 00:07:47.568 user 0m0.026s 00:07:47.568 sys 0m0.061s 00:07:47.568 13:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:47.568 13:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:47.568 ************************************ 00:07:47.568 END TEST filesystem_ext4 00:07:47.568 ************************************ 00:07:47.568 13:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:47.568 13:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:47.568 13:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:47.568 13:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:47.568 ************************************ 00:07:47.568 START TEST filesystem_btrfs 00:07:47.568 ************************************ 00:07:47.568 13:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:47.568 13:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:47.568 13:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:47.568 13:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:47.568 13:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:07:47.568 13:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:47.568 13:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:07:47.568 13:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local force 00:07:47.568 13:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:07:47.568 13:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:07:47.568 13:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:47.828 btrfs-progs v6.6.2 00:07:47.828 See https://btrfs.readthedocs.io for more information. 00:07:47.828 00:07:47.828 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:47.828 NOTE: several default settings have changed in version 5.15, please make sure 00:07:47.828 this does not affect your deployments: 00:07:47.828 - DUP for metadata (-m dup) 00:07:47.828 - enabled no-holes (-O no-holes) 00:07:47.828 - enabled free-space-tree (-R free-space-tree) 00:07:47.828 00:07:47.828 Label: (null) 00:07:47.828 UUID: 9e580e5b-5608-4efa-9064-3a202e756667 00:07:47.828 Node size: 16384 00:07:47.828 Sector size: 4096 00:07:47.828 Filesystem size: 510.00MiB 00:07:47.828 Block group profiles: 00:07:47.828 Data: single 8.00MiB 00:07:47.828 Metadata: DUP 32.00MiB 00:07:47.828 System: DUP 8.00MiB 00:07:47.828 SSD detected: yes 00:07:47.828 Zoned device: no 00:07:47.828 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:47.828 Runtime features: free-space-tree 00:07:47.828 Checksum: crc32c 00:07:47.828 Number of devices: 1 00:07:47.828 Devices: 00:07:47.828 ID SIZE PATH 00:07:47.828 1 510.00MiB /dev/nvme0n1p1 00:07:47.828 00:07:47.828 13:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # return 0 00:07:47.828 13:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:47.828 13:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:47.828 13:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:07:47.828 13:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:47.828 13:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:07:47.828 13:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:47.828 13:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:47.828 13:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 77521 00:07:47.828 13:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:47.828 13:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:47.828 13:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:47.828 13:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:47.828 ************************************ 00:07:47.828 END TEST filesystem_btrfs 00:07:47.828 ************************************ 00:07:47.828 00:07:47.828 real 0m0.212s 00:07:47.828 user 0m0.025s 00:07:47.828 sys 0m0.055s 00:07:47.828 13:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:47.828 13:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:47.828 13:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:47.828 13:06:44 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:47.828 13:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:47.828 13:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:47.828 ************************************ 00:07:47.828 START TEST filesystem_xfs 00:07:47.828 ************************************ 00:07:47.828 13:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:07:47.828 13:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:47.828 13:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:47.828 13:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:47.828 13:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:07:47.828 13:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:47.828 13:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local i=0 00:07:47.828 13:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local force 00:07:47.828 13:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:07:47.828 13:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # force=-f 00:07:47.828 13:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:47.828 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:47.828 = sectsz=512 attr=2, projid32bit=1 00:07:47.828 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:47.828 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:47.828 data = bsize=4096 blocks=130560, imaxpct=25 00:07:47.828 = sunit=0 swidth=0 blks 00:07:47.828 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:47.828 log =internal log bsize=4096 blocks=16384, version=2 00:07:47.828 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:47.828 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:48.763 Discarding blocks...Done. 
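Each filesystem pass in the trace above (ext4, btrfs, and the xfs run whose mkfs output ends here) reduces to the same few shell steps from target/filesystem.sh. A condensed sketch, reusing the device path, PID and commands exactly as they appear in this run rather than as a general recipe:

fstype=xfs                                  # the suite exercises ext4, btrfs and xfs in turn
dev=/dev/nvme0n1p1                          # partition created earlier with parted/partprobe

case "$fstype" in                           # make_filesystem: force-format the partition
  ext4)  mkfs.ext4 -F "$dev" ;;
  btrfs) mkfs.btrfs -f "$dev" ;;
  xfs)   mkfs.xfs -f "$dev" ;;
esac

mount "$dev" /mnt/device                    # mount the fabric-attached namespace
touch /mnt/device/aaa                       # minimal write
sync
rm /mnt/device/aaa                          # minimal delete
sync
umount /mnt/device

kill -0 77521                               # the nvmf_tgt for this variant must still be alive
lsblk -l -o NAME | grep -q -w nvme0n1       # ...and the namespace and its partition still visible
lsblk -l -o NAME | grep -q -w nvme0n1p1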
00:07:48.763 13:06:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # return 0 00:07:48.763 13:06:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:51.289 13:06:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:51.289 13:06:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:07:51.289 13:06:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:51.289 13:06:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:07:51.289 13:06:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:07:51.289 13:06:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:51.289 13:06:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 77521 00:07:51.289 13:06:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:51.289 13:06:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:51.289 13:06:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:51.289 13:06:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:51.289 ************************************ 00:07:51.289 END TEST filesystem_xfs 00:07:51.289 ************************************ 00:07:51.289 00:07:51.289 real 0m3.170s 00:07:51.289 user 0m0.028s 00:07:51.289 sys 0m0.052s 00:07:51.290 13:06:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:51.290 13:06:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:51.290 13:06:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:51.290 13:06:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:51.290 13:06:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:51.290 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:51.290 13:06:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:51.290 13:06:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:07:51.290 13:06:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:07:51.290 13:06:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:51.290 13:06:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:07:51.290 
13:06:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:51.290 13:06:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:07:51.290 13:06:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:51.290 13:06:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.290 13:06:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:51.290 13:06:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.290 13:06:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:51.290 13:06:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 77521 00:07:51.290 13:06:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 77521 ']' 00:07:51.290 13:06:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # kill -0 77521 00:07:51.290 13:06:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # uname 00:07:51.290 13:06:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:51.290 13:06:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 77521 00:07:51.290 killing process with pid 77521 00:07:51.290 13:06:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:51.290 13:06:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:51.290 13:06:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 77521' 00:07:51.290 13:06:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@965 -- # kill 77521 00:07:51.290 13:06:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # wait 77521 00:07:51.854 13:06:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:51.854 00:07:51.854 real 0m9.512s 00:07:51.854 user 0m35.829s 00:07:51.854 sys 0m1.686s 00:07:51.854 ************************************ 00:07:51.854 END TEST nvmf_filesystem_no_in_capsule 00:07:51.854 ************************************ 00:07:51.854 13:06:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:51.854 13:06:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:51.854 13:06:48 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:51.854 13:06:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:51.854 13:06:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:51.854 13:06:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:51.854 
************************************ 00:07:51.854 START TEST nvmf_filesystem_in_capsule 00:07:51.854 ************************************ 00:07:51.855 13:06:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 4096 00:07:51.855 13:06:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:51.855 13:06:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:51.855 13:06:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:51.855 13:06:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:51.855 13:06:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:51.855 13:06:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=77832 00:07:51.855 13:06:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 77832 00:07:51.855 13:06:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:51.855 13:06:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 77832 ']' 00:07:51.855 13:06:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:51.855 13:06:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:51.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:51.855 13:06:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:51.855 13:06:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:51.855 13:06:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:51.855 [2024-07-15 13:06:48.421747] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:51.855 [2024-07-15 13:06:48.421847] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:51.855 [2024-07-15 13:06:48.555987] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:52.112 [2024-07-15 13:06:48.656906] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:52.112 [2024-07-15 13:06:48.656974] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:52.112 [2024-07-15 13:06:48.656987] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:52.112 [2024-07-15 13:06:48.656995] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:52.112 [2024-07-15 13:06:48.657003] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
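The nvmf_filesystem_in_capsule run starting here repeats the same flow with in_capsule=4096, i.e. the TCP transport is created with a 4096-byte in-capsule data size. Reconstructed from the startup command above and the rpc_cmd calls traced a few lines below (rpc_cmd is the test helper around SPDK's scripts/rpc.py; paths, addresses and the bdev name are specific to this run), the target-side setup is roughly:

ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

# TCP transport; -c 4096 allows 4 KiB of in-capsule data (the point of this variant)
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096

# 512 MiB malloc bdev (1048576 x 512-byte blocks) exported as a namespace of cnode1 on 10.0.0.2:4420
scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420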
00:07:52.112 [2024-07-15 13:06:48.657156] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:52.112 [2024-07-15 13:06:48.657366] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:52.112 [2024-07-15 13:06:48.657983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:52.112 [2024-07-15 13:06:48.657991] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.044 13:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:53.044 13:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:07:53.044 13:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:53.044 13:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:53.044 13:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:53.044 13:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:53.044 13:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:53.044 13:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:53.044 13:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.044 13:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:53.044 [2024-07-15 13:06:49.479487] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:53.044 13:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.044 13:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:53.044 13:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.044 13:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:53.044 Malloc1 00:07:53.044 13:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.044 13:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:53.044 13:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.044 13:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:53.044 13:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.044 13:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:53.044 13:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.044 13:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:53.044 13:06:49 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.044 13:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:53.044 13:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.044 13:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:53.044 [2024-07-15 13:06:49.664619] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:53.044 13:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.044 13:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:53.044 13:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:07:53.044 13:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:07:53.044 13:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:07:53.045 13:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:07:53.045 13:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:53.045 13:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.045 13:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:53.045 13:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.045 13:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:07:53.045 { 00:07:53.045 "aliases": [ 00:07:53.045 "d62eb516-026b-4ec6-9cad-208a57011308" 00:07:53.045 ], 00:07:53.045 "assigned_rate_limits": { 00:07:53.045 "r_mbytes_per_sec": 0, 00:07:53.045 "rw_ios_per_sec": 0, 00:07:53.045 "rw_mbytes_per_sec": 0, 00:07:53.045 "w_mbytes_per_sec": 0 00:07:53.045 }, 00:07:53.045 "block_size": 512, 00:07:53.045 "claim_type": "exclusive_write", 00:07:53.045 "claimed": true, 00:07:53.045 "driver_specific": {}, 00:07:53.045 "memory_domains": [ 00:07:53.045 { 00:07:53.045 "dma_device_id": "system", 00:07:53.045 "dma_device_type": 1 00:07:53.045 }, 00:07:53.045 { 00:07:53.045 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:53.045 "dma_device_type": 2 00:07:53.045 } 00:07:53.045 ], 00:07:53.045 "name": "Malloc1", 00:07:53.045 "num_blocks": 1048576, 00:07:53.045 "product_name": "Malloc disk", 00:07:53.045 "supported_io_types": { 00:07:53.045 "abort": true, 00:07:53.045 "compare": false, 00:07:53.045 "compare_and_write": false, 00:07:53.045 "flush": true, 00:07:53.045 "nvme_admin": false, 00:07:53.045 "nvme_io": false, 00:07:53.045 "read": true, 00:07:53.045 "reset": true, 00:07:53.045 "unmap": true, 00:07:53.045 "write": true, 00:07:53.045 "write_zeroes": true 00:07:53.045 }, 00:07:53.045 "uuid": "d62eb516-026b-4ec6-9cad-208a57011308", 00:07:53.045 "zoned": false 00:07:53.045 } 00:07:53.045 ]' 00:07:53.045 13:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] 
.block_size' 00:07:53.045 13:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:07:53.045 13:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:07:53.045 13:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:07:53.045 13:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:07:53.045 13:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:07:53.045 13:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:53.045 13:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid=c8b8b44b-387e-43b9-a950-dc0d98528a02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:53.302 13:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:53.302 13:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:07:53.302 13:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:07:53.302 13:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:07:53.302 13:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:07:55.208 13:06:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:07:55.208 13:06:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:07:55.208 13:06:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:07:55.466 13:06:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:07:55.466 13:06:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:07:55.466 13:06:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:07:55.466 13:06:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:55.466 13:06:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:55.466 13:06:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:55.466 13:06:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:55.466 13:06:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:55.466 13:06:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:55.466 13:06:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:55.466 13:06:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- 
# nvme_size=536870912 00:07:55.466 13:06:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:55.466 13:06:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:55.466 13:06:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:55.466 13:06:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:55.466 13:06:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:56.398 13:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:56.398 13:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:56.398 13:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:56.398 13:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:56.398 13:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:56.398 ************************************ 00:07:56.398 START TEST filesystem_in_capsule_ext4 00:07:56.398 ************************************ 00:07:56.398 13:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:56.398 13:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:56.398 13:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:56.398 13:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:56.398 13:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:07:56.398 13:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:56.398 13:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:07:56.398 13:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local force 00:07:56.398 13:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:07:56.398 13:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:07:56.398 13:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:56.398 mke2fs 1.46.5 (30-Dec-2021) 00:07:56.655 Discarding device blocks: 0/522240 done 00:07:56.655 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:56.655 Filesystem UUID: ecd4afa7-58f8-4385-a918-7c3047e44a3d 00:07:56.655 Superblock backups stored on blocks: 00:07:56.655 8193, 
24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:56.655 00:07:56.655 Allocating group tables: 0/64 done 00:07:56.655 Writing inode tables: 0/64 done 00:07:56.655 Creating journal (8192 blocks): done 00:07:56.655 Writing superblocks and filesystem accounting information: 0/64 done 00:07:56.655 00:07:56.655 13:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # return 0 00:07:56.655 13:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:56.655 13:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:56.655 13:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:07:56.655 13:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:56.912 13:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:07:56.912 13:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:56.912 13:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:56.912 13:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 77832 00:07:56.912 13:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:56.912 13:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:56.912 13:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:56.912 13:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:56.912 00:07:56.912 real 0m0.346s 00:07:56.912 user 0m0.021s 00:07:56.912 sys 0m0.065s 00:07:56.912 13:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:56.912 13:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:56.912 ************************************ 00:07:56.912 END TEST filesystem_in_capsule_ext4 00:07:56.912 ************************************ 00:07:56.912 13:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:56.912 13:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:56.912 13:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:56.912 13:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:56.912 ************************************ 00:07:56.913 START TEST filesystem_in_capsule_btrfs 00:07:56.913 ************************************ 00:07:56.913 13:06:53 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:56.913 13:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:56.913 13:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:56.913 13:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:56.913 13:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:07:56.913 13:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:56.913 13:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:07:56.913 13:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local force 00:07:56.913 13:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:07:56.913 13:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:07:56.913 13:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:56.913 btrfs-progs v6.6.2 00:07:56.913 See https://btrfs.readthedocs.io for more information. 00:07:56.913 00:07:56.913 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:56.913 NOTE: several default settings have changed in version 5.15, please make sure 00:07:56.913 this does not affect your deployments: 00:07:56.913 - DUP for metadata (-m dup) 00:07:56.913 - enabled no-holes (-O no-holes) 00:07:56.913 - enabled free-space-tree (-R free-space-tree) 00:07:56.913 00:07:56.913 Label: (null) 00:07:56.913 UUID: 6a40bd03-b5d7-4aa4-86f8-792edccc17c7 00:07:56.913 Node size: 16384 00:07:56.913 Sector size: 4096 00:07:56.913 Filesystem size: 510.00MiB 00:07:56.913 Block group profiles: 00:07:56.913 Data: single 8.00MiB 00:07:56.913 Metadata: DUP 32.00MiB 00:07:56.913 System: DUP 8.00MiB 00:07:56.913 SSD detected: yes 00:07:56.913 Zoned device: no 00:07:56.913 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:56.913 Runtime features: free-space-tree 00:07:56.913 Checksum: crc32c 00:07:56.913 Number of devices: 1 00:07:56.913 Devices: 00:07:56.913 ID SIZE PATH 00:07:56.913 1 510.00MiB /dev/nvme0n1p1 00:07:56.913 00:07:56.913 13:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # return 0 00:07:56.913 13:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:56.913 13:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:56.913 13:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:07:57.171 13:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:57.171 13:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:07:57.171 13:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:57.171 13:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:57.171 13:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 77832 00:07:57.171 13:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:57.171 13:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:57.171 13:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:57.171 13:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:57.171 00:07:57.171 real 0m0.217s 00:07:57.171 user 0m0.015s 00:07:57.171 sys 0m0.066s 00:07:57.171 13:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:57.171 13:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:57.171 ************************************ 00:07:57.171 END TEST filesystem_in_capsule_btrfs 00:07:57.171 ************************************ 00:07:57.171 13:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:57.171 13:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:57.171 13:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:57.171 13:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:57.171 ************************************ 00:07:57.171 START TEST filesystem_in_capsule_xfs 00:07:57.171 ************************************ 00:07:57.171 13:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:07:57.171 13:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:57.171 13:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:57.171 13:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:57.171 13:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:07:57.171 13:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:57.171 13:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local i=0 00:07:57.171 13:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local force 00:07:57.171 13:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:07:57.171 13:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # force=-f 00:07:57.171 13:06:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:57.171 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:57.171 = sectsz=512 attr=2, projid32bit=1 00:07:57.171 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:57.171 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:57.171 data = bsize=4096 blocks=130560, imaxpct=25 00:07:57.171 = sunit=0 swidth=0 blks 00:07:57.171 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:57.171 log =internal log bsize=4096 blocks=16384, version=2 00:07:57.171 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:57.171 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:58.147 Discarding blocks...Done. 
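As in the first half of the section, every mkfs pass here targets a partition on the fabric-attached namespace. The attach-and-partition steps traced above (filesystem.sh@60-69) condense to the following; the NQN, host UUID, serial and sizes are taken from this run, and the size check is shown via sysfs as a stand-in for the suite's sec_size_to_bytes helper:

nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 \
    --hostid=c8b8b44b-387e-43b9-a950-dc0d98528a02

# waitforserial: poll (sleep 2 between tries) until the namespace appears with the expected serial
lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME

# resolve the block-device name and compare its size with the 512 MiB malloc bdev
nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')
cat "/sys/block/$nvme_name/size"            # 1048576 512-byte sectors = 536870912 bytes

# one GPT partition covering the whole namespace, then re-read the partition table
parted -s "/dev/$nvme_name" mklabel gpt mkpart SPDK_TEST 0% 100%
partprobe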
00:07:58.147 13:06:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # return 0 00:07:58.147 13:06:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:00.045 13:06:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:00.046 13:06:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:08:00.046 13:06:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:00.046 13:06:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:08:00.046 13:06:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:08:00.046 13:06:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:00.046 13:06:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 77832 00:08:00.046 13:06:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:00.046 13:06:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:00.046 13:06:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:00.046 13:06:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:00.046 00:08:00.046 real 0m2.611s 00:08:00.046 user 0m0.023s 00:08:00.046 sys 0m0.049s 00:08:00.046 13:06:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:00.046 13:06:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:00.046 ************************************ 00:08:00.046 END TEST filesystem_in_capsule_xfs 00:08:00.046 ************************************ 00:08:00.046 13:06:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:00.046 13:06:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:00.046 13:06:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:00.046 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:00.046 13:06:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:00.046 13:06:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:08:00.046 13:06:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:00.046 13:06:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:08:00.046 13:06:56 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:08:00.046 13:06:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:00.046 13:06:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:08:00.046 13:06:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:00.046 13:06:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.046 13:06:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:00.046 13:06:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.046 13:06:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:00.046 13:06:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 77832 00:08:00.046 13:06:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 77832 ']' 00:08:00.046 13:06:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # kill -0 77832 00:08:00.046 13:06:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # uname 00:08:00.046 13:06:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:00.046 13:06:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 77832 00:08:00.046 13:06:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:00.046 13:06:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:00.046 killing process with pid 77832 00:08:00.046 13:06:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 77832' 00:08:00.046 13:06:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@965 -- # kill 77832 00:08:00.046 13:06:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # wait 77832 00:08:00.336 13:06:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:00.336 00:08:00.336 real 0m8.564s 00:08:00.336 user 0m32.310s 00:08:00.336 sys 0m1.537s 00:08:00.336 13:06:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:00.336 ************************************ 00:08:00.336 END TEST nvmf_filesystem_in_capsule 00:08:00.336 ************************************ 00:08:00.336 13:06:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:00.336 13:06:56 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:08:00.336 13:06:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:00.336 13:06:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:08:00.336 13:06:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:00.336 13:06:57 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@120 -- # set +e 00:08:00.336 13:06:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:00.336 13:06:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:00.336 rmmod nvme_tcp 00:08:00.336 rmmod nvme_fabrics 00:08:00.336 rmmod nvme_keyring 00:08:00.336 13:06:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:00.336 13:06:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:08:00.336 13:06:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:08:00.336 13:06:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:08:00.336 13:06:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:00.336 13:06:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:00.336 13:06:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:00.336 13:06:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:00.336 13:06:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:00.336 13:06:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:00.336 13:06:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:00.336 13:06:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:00.606 13:06:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:00.606 00:08:00.606 real 0m18.887s 00:08:00.606 user 1m8.371s 00:08:00.606 sys 0m3.594s 00:08:00.606 ************************************ 00:08:00.606 END TEST nvmf_filesystem 00:08:00.606 ************************************ 00:08:00.606 13:06:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:00.606 13:06:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:00.606 13:06:57 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:00.606 13:06:57 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:00.606 13:06:57 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:00.606 13:06:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:00.606 ************************************ 00:08:00.606 START TEST nvmf_target_discovery 00:08:00.606 ************************************ 00:08:00.606 13:06:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:00.606 * Looking for test storage... 
00:08:00.606 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:00.606 13:06:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:00.606 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:08:00.606 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:00.606 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:00.606 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:00.606 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:00.607 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:00.607 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:00.607 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:00.607 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:00.607 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:00.607 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:00.607 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:08:00.607 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=c8b8b44b-387e-43b9-a950-dc0d98528a02 00:08:00.607 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:00.607 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:00.607 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:00.607 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:00.607 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:00.607 13:06:57 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:00.607 13:06:57 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:00.607 13:06:57 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:00.607 13:06:57 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.607 13:06:57 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.607 13:06:57 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.607 13:06:57 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:08:00.607 13:06:57 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.607 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:08:00.607 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:00.607 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:00.607 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:00.607 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:00.607 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:00.607 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:00.607 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:00.607 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:00.607 13:06:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:00.607 13:06:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:00.607 13:06:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:00.607 13:06:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:08:00.607 13:06:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:08:00.607 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:00.607 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:00.607 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:08:00.607 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:00.607 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:00.607 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:00.607 13:06:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:00.607 13:06:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:00.607 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:00.607 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:00.607 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:00.607 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:00.607 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:00.607 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:00.607 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:00.607 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:00.607 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:00.607 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:00.607 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:00.607 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:00.607 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:00.607 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:00.607 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:00.607 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:00.607 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:00.607 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:00.607 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:00.607 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:00.607 Cannot find device "nvmf_tgt_br" 00:08:00.607 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@155 -- # true 00:08:00.607 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:00.607 Cannot find device "nvmf_tgt_br2" 00:08:00.607 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@156 -- # true 00:08:00.607 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:00.607 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:00.607 Cannot find device "nvmf_tgt_br" 00:08:00.607 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@158 -- # true 00:08:00.607 13:06:57 nvmf_tcp.nvmf_target_discovery -- 
nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:00.607 Cannot find device "nvmf_tgt_br2" 00:08:00.607 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@159 -- # true 00:08:00.607 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:00.865 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:00.865 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:00.865 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:00.865 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@162 -- # true 00:08:00.865 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:00.865 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:00.865 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@163 -- # true 00:08:00.865 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:00.865 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:00.865 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:00.865 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:00.865 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:00.865 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:00.865 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:00.865 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:00.865 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:00.865 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:00.865 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:00.865 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:00.865 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:00.865 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:00.865 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:00.865 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:00.865 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:00.865 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:00.865 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:00.865 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:00.865 13:06:57 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:00.865 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:00.865 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:00.865 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:00.865 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:00.865 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.110 ms 00:08:00.865 00:08:00.865 --- 10.0.0.2 ping statistics --- 00:08:00.865 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:00.865 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:08:00.865 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:01.123 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:01.123 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:08:01.123 00:08:01.123 --- 10.0.0.3 ping statistics --- 00:08:01.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:01.123 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:08:01.123 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:01.123 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:01.123 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:08:01.123 00:08:01.123 --- 10.0.0.1 ping statistics --- 00:08:01.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:01.123 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:08:01.123 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:01.123 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@433 -- # return 0 00:08:01.123 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:01.123 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:01.123 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:01.123 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:01.123 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:01.123 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:01.123 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:01.123 13:06:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:01.123 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:01.123 13:06:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:01.123 13:06:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:01.123 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=78286 00:08:01.123 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 78286 00:08:01.123 13:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:01.123 13:06:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@827 -- # '[' -z 78286 ']' 00:08:01.124 13:06:57 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:01.124 13:06:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:01.124 13:06:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:01.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:01.124 13:06:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:01.124 13:06:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:01.124 [2024-07-15 13:06:57.717955] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:08:01.124 [2024-07-15 13:06:57.718045] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:01.124 [2024-07-15 13:06:57.859467] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:01.381 [2024-07-15 13:06:57.960473] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:01.381 [2024-07-15 13:06:57.960540] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:01.381 [2024-07-15 13:06:57.960552] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:01.381 [2024-07-15 13:06:57.960560] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:01.381 [2024-07-15 13:06:57.960567] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
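The nvmf_veth_init block traced above wires the initiator (root namespace) and the target (nvmf_tgt_ns_spdk namespace) together before nvmf_tgt is launched. A condensed sketch of that topology, using only the interface names, addresses, and flags that appear in the trace (a summary of the harness steps, not a general recipe; the various "ip link set ... up" calls are omitted for brevity):

# target side lives in its own network namespace
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end stays in the root namespace
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # first target port
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target port
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
# addressing: initiator 10.0.0.1, target 10.0.0.2 and 10.0.0.3 (NVMF_FIRST/SECOND_TARGET_IP)
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
# bridge the root-namespace veth peers and open TCP/4420 for NVMe/TCP
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# the target application then runs inside the namespace, as in the trace
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF

The ping checks that follow the setup (10.0.0.2 and 10.0.0.3 from the root namespace, 10.0.0.1 from inside the namespace) confirm the bridge path works before any NVMe/TCP traffic is attempted.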
00:08:01.381 [2024-07-15 13:06:57.960731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:01.381 [2024-07-15 13:06:57.960940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:01.381 [2024-07-15 13:06:57.961465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:01.381 [2024-07-15 13:06:57.961468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.945 13:06:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:01.945 13:06:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@860 -- # return 0 00:08:01.945 13:06:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:01.945 13:06:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:01.945 13:06:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:01.945 13:06:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:01.945 13:06:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:01.945 13:06:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:01.945 13:06:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:01.945 [2024-07-15 13:06:58.678948] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:02.204 13:06:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.204 13:06:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:08:02.204 13:06:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:02.204 13:06:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:02.204 13:06:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.204 13:06:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.204 Null1 00:08:02.204 13:06:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.204 13:06:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:02.204 13:06:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.204 13:06:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.204 13:06:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.204 13:06:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:02.204 13:06:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.204 13:06:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.204 13:06:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.204 13:06:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:02.204 13:06:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.204 13:06:58 nvmf_tcp.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:08:02.204 [2024-07-15 13:06:58.735343] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:02.204 13:06:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.204 13:06:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:02.204 13:06:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:02.204 13:06:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.204 13:06:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.204 Null2 00:08:02.204 13:06:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.204 13:06:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:02.204 13:06:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.204 13:06:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.204 13:06:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.204 13:06:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:02.204 13:06:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.204 13:06:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.204 13:06:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.204 13:06:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:02.204 13:06:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.204 13:06:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.204 13:06:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.204 13:06:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:02.204 13:06:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:02.204 13:06:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.204 13:06:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.204 Null3 00:08:02.204 13:06:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.204 13:06:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:02.204 13:06:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.204 13:06:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.204 13:06:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.204 13:06:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:02.204 13:06:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.204 13:06:58 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.204 13:06:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.204 13:06:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:02.204 13:06:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.204 13:06:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.204 13:06:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.204 13:06:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:02.204 13:06:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:02.204 13:06:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.204 13:06:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.204 Null4 00:08:02.204 13:06:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.204 13:06:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:02.204 13:06:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.204 13:06:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.204 13:06:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.204 13:06:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:02.204 13:06:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.204 13:06:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.204 13:06:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.204 13:06:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:02.204 13:06:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.204 13:06:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.204 13:06:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.204 13:06:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:02.204 13:06:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.204 13:06:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.204 13:06:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.204 13:06:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:02.204 13:06:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.204 13:06:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.204 13:06:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.204 
13:06:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid=c8b8b44b-387e-43b9-a950-dc0d98528a02 -t tcp -a 10.0.0.2 -s 4420 00:08:02.204 00:08:02.204 Discovery Log Number of Records 6, Generation counter 6 00:08:02.204 =====Discovery Log Entry 0====== 00:08:02.204 trtype: tcp 00:08:02.204 adrfam: ipv4 00:08:02.204 subtype: current discovery subsystem 00:08:02.204 treq: not required 00:08:02.204 portid: 0 00:08:02.204 trsvcid: 4420 00:08:02.204 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:02.204 traddr: 10.0.0.2 00:08:02.204 eflags: explicit discovery connections, duplicate discovery information 00:08:02.204 sectype: none 00:08:02.204 =====Discovery Log Entry 1====== 00:08:02.204 trtype: tcp 00:08:02.204 adrfam: ipv4 00:08:02.204 subtype: nvme subsystem 00:08:02.204 treq: not required 00:08:02.204 portid: 0 00:08:02.204 trsvcid: 4420 00:08:02.204 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:02.204 traddr: 10.0.0.2 00:08:02.204 eflags: none 00:08:02.204 sectype: none 00:08:02.204 =====Discovery Log Entry 2====== 00:08:02.204 trtype: tcp 00:08:02.204 adrfam: ipv4 00:08:02.204 subtype: nvme subsystem 00:08:02.204 treq: not required 00:08:02.204 portid: 0 00:08:02.204 trsvcid: 4420 00:08:02.204 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:02.204 traddr: 10.0.0.2 00:08:02.204 eflags: none 00:08:02.204 sectype: none 00:08:02.204 =====Discovery Log Entry 3====== 00:08:02.204 trtype: tcp 00:08:02.204 adrfam: ipv4 00:08:02.204 subtype: nvme subsystem 00:08:02.204 treq: not required 00:08:02.204 portid: 0 00:08:02.204 trsvcid: 4420 00:08:02.204 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:02.204 traddr: 10.0.0.2 00:08:02.204 eflags: none 00:08:02.204 sectype: none 00:08:02.204 =====Discovery Log Entry 4====== 00:08:02.204 trtype: tcp 00:08:02.204 adrfam: ipv4 00:08:02.204 subtype: nvme subsystem 00:08:02.204 treq: not required 00:08:02.204 portid: 0 00:08:02.204 trsvcid: 4420 00:08:02.204 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:02.204 traddr: 10.0.0.2 00:08:02.204 eflags: none 00:08:02.204 sectype: none 00:08:02.204 =====Discovery Log Entry 5====== 00:08:02.204 trtype: tcp 00:08:02.204 adrfam: ipv4 00:08:02.204 subtype: discovery subsystem referral 00:08:02.204 treq: not required 00:08:02.204 portid: 0 00:08:02.204 trsvcid: 4430 00:08:02.204 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:02.204 traddr: 10.0.0.2 00:08:02.204 eflags: none 00:08:02.204 sectype: none 00:08:02.204 Perform nvmf subsystem discovery via RPC 00:08:02.204 13:06:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:02.204 13:06:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:02.204 13:06:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.204 13:06:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.204 [ 00:08:02.204 { 00:08:02.205 "allow_any_host": true, 00:08:02.205 "hosts": [], 00:08:02.205 "listen_addresses": [ 00:08:02.205 { 00:08:02.205 "adrfam": "IPv4", 00:08:02.205 "traddr": "10.0.0.2", 00:08:02.205 "trsvcid": "4420", 00:08:02.205 "trtype": "TCP" 00:08:02.205 } 00:08:02.205 ], 00:08:02.205 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:02.205 "subtype": "Discovery" 00:08:02.205 }, 00:08:02.205 { 00:08:02.205 "allow_any_host": true, 00:08:02.205 "hosts": [], 00:08:02.205 "listen_addresses": [ 00:08:02.205 { 
00:08:02.205 "adrfam": "IPv4", 00:08:02.205 "traddr": "10.0.0.2", 00:08:02.205 "trsvcid": "4420", 00:08:02.205 "trtype": "TCP" 00:08:02.205 } 00:08:02.205 ], 00:08:02.205 "max_cntlid": 65519, 00:08:02.205 "max_namespaces": 32, 00:08:02.205 "min_cntlid": 1, 00:08:02.205 "model_number": "SPDK bdev Controller", 00:08:02.205 "namespaces": [ 00:08:02.205 { 00:08:02.205 "bdev_name": "Null1", 00:08:02.205 "name": "Null1", 00:08:02.205 "nguid": "742D179BE8CF426FAA2CB6AF4E5F01F3", 00:08:02.205 "nsid": 1, 00:08:02.205 "uuid": "742d179b-e8cf-426f-aa2c-b6af4e5f01f3" 00:08:02.205 } 00:08:02.205 ], 00:08:02.205 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:02.205 "serial_number": "SPDK00000000000001", 00:08:02.205 "subtype": "NVMe" 00:08:02.205 }, 00:08:02.205 { 00:08:02.205 "allow_any_host": true, 00:08:02.205 "hosts": [], 00:08:02.205 "listen_addresses": [ 00:08:02.205 { 00:08:02.205 "adrfam": "IPv4", 00:08:02.205 "traddr": "10.0.0.2", 00:08:02.205 "trsvcid": "4420", 00:08:02.205 "trtype": "TCP" 00:08:02.205 } 00:08:02.205 ], 00:08:02.205 "max_cntlid": 65519, 00:08:02.205 "max_namespaces": 32, 00:08:02.205 "min_cntlid": 1, 00:08:02.205 "model_number": "SPDK bdev Controller", 00:08:02.205 "namespaces": [ 00:08:02.205 { 00:08:02.205 "bdev_name": "Null2", 00:08:02.205 "name": "Null2", 00:08:02.205 "nguid": "4E9EED9810274A91B2C4A47C3DD97FA4", 00:08:02.205 "nsid": 1, 00:08:02.205 "uuid": "4e9eed98-1027-4a91-b2c4-a47c3dd97fa4" 00:08:02.205 } 00:08:02.205 ], 00:08:02.205 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:02.205 "serial_number": "SPDK00000000000002", 00:08:02.205 "subtype": "NVMe" 00:08:02.205 }, 00:08:02.205 { 00:08:02.205 "allow_any_host": true, 00:08:02.205 "hosts": [], 00:08:02.205 "listen_addresses": [ 00:08:02.205 { 00:08:02.205 "adrfam": "IPv4", 00:08:02.205 "traddr": "10.0.0.2", 00:08:02.205 "trsvcid": "4420", 00:08:02.205 "trtype": "TCP" 00:08:02.205 } 00:08:02.205 ], 00:08:02.205 "max_cntlid": 65519, 00:08:02.205 "max_namespaces": 32, 00:08:02.205 "min_cntlid": 1, 00:08:02.205 "model_number": "SPDK bdev Controller", 00:08:02.205 "namespaces": [ 00:08:02.205 { 00:08:02.205 "bdev_name": "Null3", 00:08:02.205 "name": "Null3", 00:08:02.205 "nguid": "2263096B39FF438B98DA1AC249CC8B65", 00:08:02.205 "nsid": 1, 00:08:02.205 "uuid": "2263096b-39ff-438b-98da-1ac249cc8b65" 00:08:02.205 } 00:08:02.205 ], 00:08:02.205 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:02.205 "serial_number": "SPDK00000000000003", 00:08:02.205 "subtype": "NVMe" 00:08:02.205 }, 00:08:02.205 { 00:08:02.205 "allow_any_host": true, 00:08:02.205 "hosts": [], 00:08:02.205 "listen_addresses": [ 00:08:02.205 { 00:08:02.205 "adrfam": "IPv4", 00:08:02.205 "traddr": "10.0.0.2", 00:08:02.205 "trsvcid": "4420", 00:08:02.205 "trtype": "TCP" 00:08:02.205 } 00:08:02.205 ], 00:08:02.205 "max_cntlid": 65519, 00:08:02.205 "max_namespaces": 32, 00:08:02.205 "min_cntlid": 1, 00:08:02.205 "model_number": "SPDK bdev Controller", 00:08:02.463 "namespaces": [ 00:08:02.463 { 00:08:02.463 "bdev_name": "Null4", 00:08:02.463 "name": "Null4", 00:08:02.463 "nguid": "4EE8D716427747E88B64F3939B6541AE", 00:08:02.463 "nsid": 1, 00:08:02.463 "uuid": "4ee8d716-4277-47e8-8b64-f3939b6541ae" 00:08:02.463 } 00:08:02.463 ], 00:08:02.463 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:02.463 "serial_number": "SPDK00000000000004", 00:08:02.463 "subtype": "NVMe" 00:08:02.463 } 00:08:02.463 ] 00:08:02.463 13:06:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.463 13:06:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 
1 4 00:08:02.463 13:06:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:02.463 13:06:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:02.463 13:06:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.463 13:06:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.463 13:06:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.463 13:06:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:02.463 13:06:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.463 13:06:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.463 13:06:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.463 13:06:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:02.463 13:06:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:02.463 13:06:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.463 13:06:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.463 13:06:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.463 13:06:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:02.463 13:06:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.463 13:06:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.463 13:06:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.463 13:06:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:02.463 13:06:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:02.463 13:06:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.463 13:06:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.463 13:06:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.463 13:06:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:02.463 13:06:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.463 13:06:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.463 13:06:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.463 13:06:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:02.463 13:06:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:02.463 13:06:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.463 13:06:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.463 13:06:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.463 13:06:59 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:02.463 13:06:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.463 13:06:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.463 13:06:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.463 13:06:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:02.463 13:06:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.463 13:06:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.463 13:06:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.463 13:06:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:02.463 13:06:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:02.463 13:06:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.463 13:06:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.463 13:06:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.463 13:06:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:08:02.463 13:06:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:02.463 13:06:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:02.463 13:06:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:08:02.463 13:06:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:02.463 13:06:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:08:02.463 13:06:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:02.463 13:06:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:08:02.463 13:06:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:02.463 13:06:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:02.463 rmmod nvme_tcp 00:08:02.463 rmmod nvme_fabrics 00:08:02.463 rmmod nvme_keyring 00:08:02.463 13:06:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:02.463 13:06:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:08:02.463 13:06:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:08:02.463 13:06:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 78286 ']' 00:08:02.463 13:06:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 78286 00:08:02.463 13:06:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@946 -- # '[' -z 78286 ']' 00:08:02.463 13:06:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@950 -- # kill -0 78286 00:08:02.463 13:06:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # uname 00:08:02.463 13:06:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:02.463 13:06:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 78286 00:08:02.463 13:06:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:02.463 
13:06:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:02.463 killing process with pid 78286 00:08:02.463 13:06:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 78286' 00:08:02.463 13:06:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@965 -- # kill 78286 00:08:02.463 13:06:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@970 -- # wait 78286 00:08:02.721 13:06:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:02.721 13:06:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:02.721 13:06:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:02.721 13:06:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:02.721 13:06:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:02.721 13:06:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:02.721 13:06:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:02.721 13:06:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:02.721 13:06:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:02.721 00:08:02.721 real 0m2.314s 00:08:02.721 user 0m6.043s 00:08:02.721 sys 0m0.623s 00:08:02.721 13:06:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:02.721 13:06:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.721 ************************************ 00:08:02.721 END TEST nvmf_target_discovery 00:08:02.721 ************************************ 00:08:02.980 13:06:59 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:02.980 13:06:59 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:02.980 13:06:59 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:02.980 13:06:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:02.980 ************************************ 00:08:02.980 START TEST nvmf_referrals 00:08:02.980 ************************************ 00:08:02.980 13:06:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:02.980 * Looking for test storage... 
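Stripped of the xtrace noise, the nvmf_target_discovery run that ends above drives the target through a short RPC sequence and then verifies it from the initiator side. A condensed sketch using the same rpc_cmd helper and arguments that appear in the trace; the per-subsystem loop (Null1..Null4, cnode1..cnode4) is shown for a single iteration, and elided host identifiers are left as "...":

# transport plus a null backing bdev (sizes come from NULL_BDEV_SIZE / NULL_BLOCK_SIZE in discovery.sh)
rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd bdev_null_create Null1 102400 512
# one subsystem per bdev, each with a namespace and a listener on the in-namespace address
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# ... repeated for cnode2..cnode4 with Null2..Null4 ...
# discovery listener plus one referral on port 4430
rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
# initiator-side check: 6 discovery log records expected (4 subsystems, current discovery, referral)
nvme discover --hostnqn=... --hostid=... -t tcp -a 10.0.0.2 -s 4420
rpc_cmd nvmf_get_subsystems
# teardown mirrors setup
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
rpc_cmd bdev_null_delete Null1
rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430

The nvmf_get_subsystems output earlier in the trace is the JSON view of exactly this state: the discovery subsystem plus cnode1..cnode4, each with one null namespace and one TCP listener on 10.0.0.2:4420.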
00:08:02.980 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:02.980 13:06:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:02.980 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:08:02.980 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:02.980 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:02.980 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:02.980 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:02.980 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:02.980 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:02.981 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:02.981 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:02.981 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:02.981 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:02.981 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:08:02.981 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=c8b8b44b-387e-43b9-a950-dc0d98528a02 00:08:02.981 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:02.981 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:02.981 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:02.981 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:02.981 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:02.981 13:06:59 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:02.981 13:06:59 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:02.981 13:06:59 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:02.981 13:06:59 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.981 13:06:59 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.981 13:06:59 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.981 13:06:59 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:08:02.981 13:06:59 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.981 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:08:02.981 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:02.981 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:02.981 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:02.981 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:02.981 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:02.981 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:02.981 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:02.981 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:02.981 13:06:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:02.981 13:06:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:02.981 13:06:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:08:02.981 13:06:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:02.981 13:06:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:02.981 13:06:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:02.981 13:06:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:08:02.981 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:02.981 13:06:59 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:02.981 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:02.981 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:02.981 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:02.981 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:02.981 13:06:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:02.981 13:06:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:02.981 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:02.981 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:02.981 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:02.981 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:02.981 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:02.981 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:02.981 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:02.981 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:02.981 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:02.981 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:02.981 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:02.981 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:02.981 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:02.981 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:02.981 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:02.981 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:02.981 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:02.981 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:02.981 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:02.981 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:02.981 Cannot find device "nvmf_tgt_br" 00:08:02.981 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@155 -- # true 00:08:02.981 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:02.981 Cannot find device "nvmf_tgt_br2" 00:08:02.981 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@156 -- # true 00:08:02.981 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:02.981 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:02.981 Cannot find device "nvmf_tgt_br" 00:08:02.981 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@158 -- # true 00:08:02.981 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:02.981 Cannot find device "nvmf_tgt_br2" 
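The "Cannot find device" and "Cannot open network namespace" messages here are expected: the previous test's nvmftestfini already flushed and deleted the topology, and each failing cleanup command in the trace is immediately followed by a "# true" entry, which suggests the helper treats cleanup as best-effort, roughly along these lines (inferred from the paired true entries, not quoted from nvmf/common.sh):

# best-effort teardown: a link or namespace that is already gone is not an error
ip link set nvmf_tgt_br nomaster || true
ip link delete nvmf_init_if || true
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true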
00:08:02.981 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@159 -- # true 00:08:02.981 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:03.240 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:03.240 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:03.240 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:03.240 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@162 -- # true 00:08:03.240 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:03.240 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:03.240 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@163 -- # true 00:08:03.240 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:03.240 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:03.240 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:03.240 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:03.240 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:03.240 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:03.240 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:03.240 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:03.240 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:03.240 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:03.240 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:03.240 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:03.240 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:03.240 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:03.240 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:03.240 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:03.240 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:03.240 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:03.240 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:03.240 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:03.240 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:03.240 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:03.240 13:06:59 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:03.240 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:03.240 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:03.240 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.133 ms 00:08:03.240 00:08:03.240 --- 10.0.0.2 ping statistics --- 00:08:03.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:03.240 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:08:03.240 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:03.501 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:03.501 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:08:03.501 00:08:03.501 --- 10.0.0.3 ping statistics --- 00:08:03.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:03.501 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:08:03.501 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:03.501 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:03.501 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 00:08:03.501 00:08:03.501 --- 10.0.0.1 ping statistics --- 00:08:03.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:03.501 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:08:03.501 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:03.501 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@433 -- # return 0 00:08:03.501 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:03.501 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:03.501 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:03.501 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:03.501 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:03.501 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:03.501 13:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:03.501 13:07:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:03.501 13:07:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:03.501 13:07:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:03.501 13:07:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:03.501 13:07:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=78514 00:08:03.501 13:07:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:03.501 13:07:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 78514 00:08:03.501 13:07:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@827 -- # '[' -z 78514 ']' 00:08:03.501 13:07:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:03.501 13:07:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:03.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
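For reference, the topology that nvmf_veth_init just finished building (and verified with the three pings above) can be reproduced by hand with the same commands that appear in the trace: three veth pairs whose target ends live in the nvmf_tgt_ns_spdk namespace, host-side peers enslaved to a bridge, and iptables rules admitting NVMe/TCP traffic on port 4420. Consolidated sketch (run as root):

# Namespace plus three veth pairs: one initiator-facing, two target-facing.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# Move the target ends into the namespace and assign the test addresses.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# Bring everything up and bridge the host-side peers together.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br  up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if  up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Admit NVMe/TCP traffic and sanity-check connectivity in both directions.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1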
00:08:03.501 13:07:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:03.501 13:07:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:03.501 13:07:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:03.501 [2024-07-15 13:07:00.070997] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:08:03.501 [2024-07-15 13:07:00.071088] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:03.501 [2024-07-15 13:07:00.209098] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:03.760 [2024-07-15 13:07:00.310733] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:03.760 [2024-07-15 13:07:00.310784] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:03.760 [2024-07-15 13:07:00.310795] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:03.760 [2024-07-15 13:07:00.310804] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:03.760 [2024-07-15 13:07:00.310812] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:03.760 [2024-07-15 13:07:00.310927] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:03.760 [2024-07-15 13:07:00.311297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:03.760 [2024-07-15 13:07:00.311868] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:03.760 [2024-07-15 13:07:00.311879] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.371 13:07:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:04.371 13:07:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@860 -- # return 0 00:08:04.372 13:07:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:04.372 13:07:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:04.372 13:07:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:04.629 13:07:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:04.629 13:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:04.629 13:07:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.629 13:07:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:04.629 [2024-07-15 13:07:01.142267] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:04.629 13:07:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.629 13:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:04.629 13:07:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.629 13:07:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:04.629 [2024-07-15 13:07:01.167365] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 8009 *** 00:08:04.630 13:07:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.630 13:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:04.630 13:07:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.630 13:07:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:04.630 13:07:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.630 13:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:04.630 13:07:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.630 13:07:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:04.630 13:07:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.630 13:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:04.630 13:07:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.630 13:07:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:04.630 13:07:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.630 13:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:04.630 13:07:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.630 13:07:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:04.630 13:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:08:04.630 13:07:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.630 13:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:04.630 13:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:04.630 13:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:04.630 13:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:04.630 13:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:04.630 13:07:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.630 13:07:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:04.630 13:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:04.630 13:07:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.630 13:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:04.630 13:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:04.630 13:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:04.630 13:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:04.630 13:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:04.630 13:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 
--hostid=c8b8b44b-387e-43b9-a950-dc0d98528a02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:04.630 13:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:04.630 13:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:04.887 13:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:04.887 13:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:04.887 13:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:04.887 13:07:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.887 13:07:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:04.887 13:07:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.887 13:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:04.887 13:07:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.887 13:07:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:04.887 13:07:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.887 13:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:04.887 13:07:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.887 13:07:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:04.887 13:07:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.887 13:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:04.887 13:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:08:04.887 13:07:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.887 13:07:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:04.887 13:07:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.887 13:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:04.887 13:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:04.887 13:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:04.887 13:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:04.887 13:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid=c8b8b44b-387e-43b9-a950-dc0d98528a02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:04.887 13:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:04.887 13:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:04.887 13:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:04.887 13:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:04.887 13:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 
127.0.0.2 -s 4430 -n discovery 00:08:04.887 13:07:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.887 13:07:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:04.887 13:07:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.887 13:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:04.887 13:07:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.887 13:07:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:04.887 13:07:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.887 13:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:04.887 13:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:04.887 13:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:04.887 13:07:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.887 13:07:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:04.887 13:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:04.887 13:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:04.887 13:07:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:05.144 13:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:05.144 13:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:05.144 13:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:05.144 13:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:05.144 13:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:05.144 13:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:05.144 13:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid=c8b8b44b-387e-43b9-a950-dc0d98528a02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:05.144 13:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:05.144 13:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:05.144 13:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:05.144 13:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:05.144 13:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:05.144 13:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:05.144 13:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid=c8b8b44b-387e-43b9-a950-dc0d98528a02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:05.144 13:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:05.144 13:07:01 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:05.144 13:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:05.144 13:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:05.144 13:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:05.144 13:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid=c8b8b44b-387e-43b9-a950-dc0d98528a02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:05.144 13:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:05.144 13:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:05.145 13:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:05.145 13:07:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:05.145 13:07:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:05.145 13:07:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:05.145 13:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:05.145 13:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:05.145 13:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:05.145 13:07:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:05.145 13:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:05.145 13:07:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:05.145 13:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:05.401 13:07:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:05.401 13:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:05.401 13:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:05.401 13:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:05.401 13:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:05.401 13:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:05.401 13:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:05.401 13:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid=c8b8b44b-387e-43b9-a950-dc0d98528a02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:05.401 13:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:05.401 13:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:05.401 13:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:05.401 13:07:01 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:05.401 13:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:05.401 13:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:05.401 13:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid=c8b8b44b-387e-43b9-a950-dc0d98528a02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:05.401 13:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:05.401 13:07:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:05.401 13:07:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:05.401 13:07:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:05.401 13:07:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:05.401 13:07:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid=c8b8b44b-387e-43b9-a950-dc0d98528a02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:05.401 13:07:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:05.401 13:07:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:05.401 13:07:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:05.401 13:07:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:05.401 13:07:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:05.659 13:07:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:05.659 13:07:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:05.659 13:07:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:05.659 13:07:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:08:05.659 13:07:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:05.659 13:07:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:05.659 13:07:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:05.659 13:07:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:05.659 13:07:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:05.660 13:07:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:05.660 13:07:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid=c8b8b44b-387e-43b9-a950-dc0d98528a02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:05.660 13:07:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:05.660 13:07:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:05.660 
13:07:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:05.660 13:07:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:05.660 13:07:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:05.660 13:07:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:08:05.660 13:07:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:05.660 13:07:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:08:05.660 13:07:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:05.660 13:07:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:08:05.660 13:07:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:05.660 13:07:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:05.660 rmmod nvme_tcp 00:08:05.660 rmmod nvme_fabrics 00:08:05.660 rmmod nvme_keyring 00:08:05.660 13:07:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:05.660 13:07:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:08:05.660 13:07:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:08:05.660 13:07:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 78514 ']' 00:08:05.660 13:07:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 78514 00:08:05.660 13:07:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@946 -- # '[' -z 78514 ']' 00:08:05.660 13:07:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@950 -- # kill -0 78514 00:08:05.660 13:07:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # uname 00:08:05.660 13:07:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:05.660 13:07:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 78514 00:08:05.928 13:07:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:05.928 killing process with pid 78514 00:08:05.928 13:07:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:05.928 13:07:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@964 -- # echo 'killing process with pid 78514' 00:08:05.928 13:07:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@965 -- # kill 78514 00:08:05.928 13:07:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@970 -- # wait 78514 00:08:05.928 13:07:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:05.928 13:07:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:05.928 13:07:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:05.928 13:07:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:05.928 13:07:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:05.928 13:07:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:05.928 13:07:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:05.928 13:07:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:05.928 13:07:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:05.928 00:08:05.928 real 0m3.147s 00:08:05.928 user 0m10.174s 00:08:05.928 sys 0m0.889s 00:08:05.928 13:07:02 nvmf_tcp.nvmf_referrals -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:08:05.928 13:07:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:05.928 ************************************ 00:08:05.928 END TEST nvmf_referrals 00:08:05.928 ************************************ 00:08:06.188 13:07:02 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:06.188 13:07:02 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:06.188 13:07:02 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:06.188 13:07:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:06.188 ************************************ 00:08:06.188 START TEST nvmf_connect_disconnect 00:08:06.188 ************************************ 00:08:06.188 13:07:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:06.188 * Looking for test storage... 00:08:06.188 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:06.188 13:07:02 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:06.188 13:07:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:08:06.188 13:07:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:06.188 13:07:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:06.188 13:07:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:06.188 13:07:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:06.188 13:07:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:06.188 13:07:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:06.188 13:07:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:06.188 13:07:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:06.188 13:07:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:06.188 13:07:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:06.188 13:07:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:08:06.188 13:07:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=c8b8b44b-387e-43b9-a950-dc0d98528a02 00:08:06.188 13:07:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:06.188 13:07:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:06.188 13:07:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:06.188 13:07:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:06.188 13:07:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:06.188 13:07:02 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:06.188 13:07:02 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:06.188 13:07:02 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:06.188 13:07:02 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.188 13:07:02 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.188 13:07:02 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.188 13:07:02 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:08:06.188 13:07:02 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.188 13:07:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:08:06.188 13:07:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:06.188 13:07:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:06.188 13:07:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:06.188 13:07:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:06.188 13:07:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:06.188 13:07:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:06.188 13:07:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:06.188 13:07:02 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:06.188 13:07:02 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:06.188 13:07:02 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:06.188 13:07:02 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:06.188 13:07:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:06.188 13:07:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:06.188 13:07:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:06.188 13:07:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:06.188 13:07:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:06.188 13:07:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:06.188 13:07:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:06.188 13:07:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:06.188 13:07:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:06.188 13:07:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:06.188 13:07:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:06.188 13:07:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:06.188 13:07:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:06.188 13:07:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:06.188 13:07:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:06.188 13:07:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:06.188 13:07:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:06.188 13:07:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:06.188 13:07:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:06.188 13:07:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:06.188 13:07:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:06.188 13:07:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:06.188 13:07:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:06.188 13:07:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:06.188 13:07:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:06.188 13:07:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:06.188 13:07:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:06.188 13:07:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:06.188 Cannot find device 
"nvmf_tgt_br" 00:08:06.188 13:07:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@155 -- # true 00:08:06.188 13:07:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:06.188 Cannot find device "nvmf_tgt_br2" 00:08:06.188 13:07:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # true 00:08:06.188 13:07:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:06.188 13:07:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:06.188 Cannot find device "nvmf_tgt_br" 00:08:06.188 13:07:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@158 -- # true 00:08:06.188 13:07:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:06.188 Cannot find device "nvmf_tgt_br2" 00:08:06.189 13:07:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # true 00:08:06.189 13:07:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:06.189 13:07:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:06.189 13:07:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:06.189 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:06.189 13:07:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # true 00:08:06.189 13:07:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:06.189 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:06.189 13:07:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # true 00:08:06.189 13:07:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:06.446 13:07:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:06.446 13:07:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:06.446 13:07:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:06.446 13:07:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:06.446 13:07:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:06.446 13:07:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:06.446 13:07:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:06.446 13:07:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:06.446 13:07:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:06.446 13:07:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:06.446 13:07:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:06.446 13:07:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:06.446 13:07:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@187 -- # 
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:06.446 13:07:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:06.446 13:07:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:06.446 13:07:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:06.446 13:07:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:06.446 13:07:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:06.446 13:07:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:06.446 13:07:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:06.446 13:07:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:06.446 13:07:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:06.447 13:07:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:06.447 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:06.447 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:08:06.447 00:08:06.447 --- 10.0.0.2 ping statistics --- 00:08:06.447 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:06.447 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:08:06.447 13:07:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:06.447 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:06.447 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:08:06.447 00:08:06.447 --- 10.0.0.3 ping statistics --- 00:08:06.447 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:06.447 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:08:06.447 13:07:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:06.447 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:06.447 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:08:06.447 00:08:06.447 --- 10.0.0.1 ping statistics --- 00:08:06.447 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:06.447 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:08:06.447 13:07:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:06.447 13:07:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@433 -- # return 0 00:08:06.447 13:07:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:06.447 13:07:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:06.447 13:07:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:06.447 13:07:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:06.447 13:07:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:06.447 13:07:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:06.447 13:07:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:06.447 13:07:03 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:06.447 13:07:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:06.447 13:07:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:06.447 13:07:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:06.447 13:07:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=78820 00:08:06.447 13:07:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 78820 00:08:06.447 13:07:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@827 -- # '[' -z 78820 ']' 00:08:06.447 13:07:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:06.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:06.447 13:07:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:06.447 13:07:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:06.447 13:07:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:06.447 13:07:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:06.447 13:07:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:06.706 [2024-07-15 13:07:03.218192] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:08:06.706 [2024-07-15 13:07:03.218316] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:06.706 [2024-07-15 13:07:03.351623] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:06.963 [2024-07-15 13:07:03.465858] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
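nvmfappstart (seen here for pid 78820, and earlier for pid 78514 in the referrals test) launches nvmf_tgt inside the target namespace and then blocks in waitforlisten until the application answers on its RPC socket, which is what the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." line reports. A simplified approximation of that pattern; the real waitforlisten helper in autotest_common.sh is more elaborate (timeouts, error reporting), so treat this as a sketch only:

# Start the target inside the namespace with the same flags as the trace.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# Poll the RPC socket until the target responds (rough stand-in for waitforlisten).
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
until "$rpc_py" -s /var/tmp/spdk.sock rpc_get_methods > /dev/null 2>&1; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.2
done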
00:08:06.963 [2024-07-15 13:07:03.466167] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:06.963 [2024-07-15 13:07:03.466403] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:06.963 [2024-07-15 13:07:03.466623] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:06.963 [2024-07-15 13:07:03.466745] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:06.963 [2024-07-15 13:07:03.466959] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:06.963 [2024-07-15 13:07:03.467045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:06.963 [2024-07-15 13:07:03.467583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:06.963 [2024-07-15 13:07:03.467645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.530 13:07:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:07.530 13:07:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # return 0 00:08:07.530 13:07:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:07.530 13:07:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:07.530 13:07:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:07.530 13:07:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:07.530 13:07:04 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:07.530 13:07:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.530 13:07:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:07.530 [2024-07-15 13:07:04.255378] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:07.823 13:07:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.823 13:07:04 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:07.823 13:07:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.823 13:07:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:07.823 13:07:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.823 13:07:04 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:07.823 13:07:04 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:07.823 13:07:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.823 13:07:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:07.823 13:07:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.823 13:07:04 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:07.823 13:07:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 
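The rpc_cmd calls traced here, together with the listener registration that follows just below, provision a single TCP subsystem backed by a 64 MiB malloc bdev; the test then runs 100 connect/disconnect passes against it, and the long run of "disconnected 1 controller(s)" lines that fills the rest of this trace is the output of those passes. A hand-run sketch of the same sequence, with flags copied from the trace and the loop body reduced to its essentials (the real connect_disconnect.sh wraps these calls in extra checks and passes host NQN/ID options to nvme connect):

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Transport, bdev, subsystem, namespace, listener: the steps traced above and below.
"$rpc_py" nvmf_create_transport -t tcp -o -u 8192 -c 0
"$rpc_py" bdev_malloc_create 64 512                      # creates "Malloc0": 64 MiB, 512-byte blocks
"$rpc_py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
"$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$rpc_py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# The 100-iteration connect/disconnect loop; "nvme disconnect" is what prints
# the "NQN:... disconnected 1 controller(s)" lines seen below.
for i in $(seq 1 100); do
    nvme connect -i 8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
done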
00:08:07.823 13:07:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:07.823 13:07:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.823 13:07:04 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:07.823 13:07:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.823 13:07:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:07.823 [2024-07-15 13:07:04.336236] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:07.823 13:07:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.823 13:07:04 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:08:07.823 13:07:04 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:08:07.823 13:07:04 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:08:07.823 13:07:04 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:08:10.357 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:12.253 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:14.803 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:16.699 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:19.227 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:21.127 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:23.712 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:25.607 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:28.131 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:30.028 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:32.576 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:34.489 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:37.013 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:38.909 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:41.465 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:43.361 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:45.957 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:47.851 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:50.377 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:52.276 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:54.822 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:57.346 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:59.244 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:01.774 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:03.673 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:06.196 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:08.093 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:10.619 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:12.517 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:15.049 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:16.951 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:09:19.542 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:21.442 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:23.973 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:26.501 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:28.400 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:30.925 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:32.822 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:34.779 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:37.395 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:39.323 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:41.852 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:43.766 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:46.294 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:48.216 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:50.756 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:52.658 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:55.267 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:57.207 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:59.763 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:01.662 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:04.194 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:06.135 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:08.664 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:10.565 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:13.085 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:14.984 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:17.513 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:20.037 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:21.935 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:24.460 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:26.357 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:28.882 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:30.782 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:33.335 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:35.234 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:37.770 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:39.726 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:42.246 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:44.142 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:46.697 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:49.222 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:51.123 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:53.653 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:55.550 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:58.079 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:00.027 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:02.552 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:04.452 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:06.979 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:08.888 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:11.427 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:13.327 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:15.857 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:18.394 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:20.294 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:22.822 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:24.718 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:27.245 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:29.146 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:31.676 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:33.584 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:36.133 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:38.093 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:40.624 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:42.542 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:45.065 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:46.972 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:49.521 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:51.422 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:51.422 13:10:48 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:11:51.422 13:10:48 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:11:51.422 13:10:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:51.422 13:10:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:11:51.422 13:10:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:51.422 13:10:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:11:51.422 13:10:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:51.422 13:10:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:51.422 rmmod nvme_tcp 00:11:51.422 rmmod nvme_fabrics 00:11:51.422 rmmod nvme_keyring 00:11:51.681 13:10:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:51.681 13:10:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:11:51.681 13:10:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:11:51.681 13:10:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 78820 ']' 00:11:51.681 13:10:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 78820 00:11:51.681 13:10:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@946 -- # '[' -z 78820 ']' 00:11:51.681 13:10:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # kill -0 78820 00:11:51.681 13:10:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # uname 00:11:51.681 13:10:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:51.681 13:10:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 78820 00:11:51.681 killing process with pid 78820 00:11:51.681 13:10:48 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:51.681 13:10:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:51.681 13:10:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 78820' 00:11:51.681 13:10:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@965 -- # kill 78820 00:11:51.681 13:10:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # wait 78820 00:11:51.939 13:10:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:51.939 13:10:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:51.939 13:10:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:51.939 13:10:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:51.939 13:10:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:51.939 13:10:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:51.939 13:10:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:51.939 13:10:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:51.939 13:10:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:51.939 00:11:51.939 real 3m45.791s 00:11:51.939 user 14m34.126s 00:11:51.939 sys 0m26.031s 00:11:51.939 ************************************ 00:11:51.939 END TEST nvmf_connect_disconnect 00:11:51.939 ************************************ 00:11:51.939 13:10:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:51.939 13:10:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:51.939 13:10:48 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:51.939 13:10:48 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:11:51.939 13:10:48 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:51.939 13:10:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:51.939 ************************************ 00:11:51.940 START TEST nvmf_multitarget 00:11:51.940 ************************************ 00:11:51.940 13:10:48 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:51.940 * Looking for test storage... 
00:11:51.940 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:51.940 13:10:48 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:51.940 13:10:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:11:51.940 13:10:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:51.940 13:10:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:51.940 13:10:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:51.940 13:10:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:51.940 13:10:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:51.940 13:10:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:51.940 13:10:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:51.940 13:10:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:51.940 13:10:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:51.940 13:10:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:51.940 13:10:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:11:51.940 13:10:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=c8b8b44b-387e-43b9-a950-dc0d98528a02 00:11:51.940 13:10:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:51.940 13:10:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:51.940 13:10:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:51.940 13:10:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:51.940 13:10:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:51.940 13:10:48 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:51.940 13:10:48 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:51.940 13:10:48 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:51.940 13:10:48 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.940 13:10:48 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.940 13:10:48 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.940 13:10:48 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:11:51.940 13:10:48 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.940 13:10:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:11:51.940 13:10:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:51.940 13:10:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:51.940 13:10:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:51.940 13:10:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:51.940 13:10:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:51.940 13:10:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:51.940 13:10:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:51.940 13:10:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:51.940 13:10:48 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:11:51.940 13:10:48 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:11:51.940 13:10:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:51.940 13:10:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:51.940 13:10:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:51.940 13:10:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:51.940 13:10:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:51.940 13:10:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:51.940 13:10:48 
nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:51.940 13:10:48 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:51.940 13:10:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:51.940 13:10:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:51.940 13:10:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:51.940 13:10:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:51.940 13:10:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:51.940 13:10:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:51.940 13:10:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:51.940 13:10:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:51.940 13:10:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:51.940 13:10:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:51.940 13:10:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:51.940 13:10:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:51.940 13:10:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:51.940 13:10:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:51.940 13:10:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:51.940 13:10:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:51.940 13:10:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:51.940 13:10:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:51.940 13:10:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:51.940 13:10:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:52.198 Cannot find device "nvmf_tgt_br" 00:11:52.198 13:10:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@155 -- # true 00:11:52.198 13:10:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:52.198 Cannot find device "nvmf_tgt_br2" 00:11:52.198 13:10:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@156 -- # true 00:11:52.198 13:10:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:52.198 13:10:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:52.198 Cannot find device "nvmf_tgt_br" 00:11:52.198 13:10:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@158 -- # true 00:11:52.198 13:10:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:52.198 Cannot find device "nvmf_tgt_br2" 00:11:52.198 13:10:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@159 -- # true 00:11:52.198 13:10:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:52.198 13:10:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:52.198 13:10:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link 
delete nvmf_tgt_if 00:11:52.198 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:52.198 13:10:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@162 -- # true 00:11:52.198 13:10:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:52.198 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:52.198 13:10:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@163 -- # true 00:11:52.198 13:10:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:52.198 13:10:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:52.198 13:10:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:52.198 13:10:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:52.198 13:10:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:52.198 13:10:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:52.198 13:10:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:52.198 13:10:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:52.198 13:10:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:52.198 13:10:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:52.198 13:10:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:52.198 13:10:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:52.198 13:10:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:52.198 13:10:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:52.456 13:10:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:52.456 13:10:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:52.456 13:10:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:52.456 13:10:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:52.456 13:10:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:52.456 13:10:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:52.456 13:10:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:52.456 13:10:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:52.456 13:10:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:52.456 13:10:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:52.456 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:52.456 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.141 ms 00:11:52.456 00:11:52.456 --- 10.0.0.2 ping statistics --- 00:11:52.456 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:52.456 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:11:52.456 13:10:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:52.456 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:52.456 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:11:52.456 00:11:52.456 --- 10.0.0.3 ping statistics --- 00:11:52.456 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:52.456 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:11:52.456 13:10:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:52.456 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:52.456 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:11:52.456 00:11:52.456 --- 10.0.0.1 ping statistics --- 00:11:52.456 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:52.456 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:11:52.456 13:10:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:52.456 13:10:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@433 -- # return 0 00:11:52.456 13:10:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:52.456 13:10:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:52.456 13:10:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:52.456 13:10:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:52.456 13:10:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:52.456 13:10:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:52.456 13:10:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:52.456 13:10:49 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:11:52.456 13:10:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:52.457 13:10:49 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:52.457 13:10:49 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:52.457 13:10:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=82587 00:11:52.457 13:10:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 82587 00:11:52.457 13:10:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:52.457 13:10:49 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@827 -- # '[' -z 82587 ']' 00:11:52.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:52.457 13:10:49 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:52.457 13:10:49 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:52.457 13:10:49 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:52.457 13:10:49 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:52.457 13:10:49 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:52.457 [2024-07-15 13:10:49.094749] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:11:52.457 [2024-07-15 13:10:49.094838] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:52.714 [2024-07-15 13:10:49.233039] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:52.714 [2024-07-15 13:10:49.329948] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:52.714 [2024-07-15 13:10:49.330154] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:52.714 [2024-07-15 13:10:49.330368] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:52.714 [2024-07-15 13:10:49.330488] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:52.714 [2024-07-15 13:10:49.330661] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:52.714 [2024-07-15 13:10:49.330837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:52.714 [2024-07-15 13:10:49.330928] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:52.714 [2024-07-15 13:10:49.331156] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:52.714 [2024-07-15 13:10:49.331877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:53.647 13:10:50 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:53.647 13:10:50 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@860 -- # return 0 00:11:53.647 13:10:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:53.647 13:10:50 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:53.647 13:10:50 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:53.647 13:10:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:53.647 13:10:50 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:53.647 13:10:50 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:53.647 13:10:50 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:11:53.647 13:10:50 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:11:53.647 13:10:50 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:11:53.905 "nvmf_tgt_1" 00:11:53.905 13:10:50 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:11:53.905 "nvmf_tgt_2" 00:11:53.905 13:10:50 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:11:53.905 13:10:50 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:54.163 13:10:50 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:11:54.163 13:10:50 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:11:54.163 true 00:11:54.163 13:10:50 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:11:54.420 true 00:11:54.420 13:10:50 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:54.421 13:10:50 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:11:54.421 13:10:51 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:11:54.421 13:10:51 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:54.421 13:10:51 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:11:54.421 13:10:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:54.421 13:10:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:11:54.421 13:10:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:54.421 13:10:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:11:54.421 13:10:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:54.421 13:10:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:54.421 rmmod nvme_tcp 00:11:54.421 rmmod nvme_fabrics 00:11:54.421 rmmod nvme_keyring 00:11:54.678 13:10:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:54.678 13:10:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:11:54.678 13:10:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:11:54.678 13:10:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 82587 ']' 00:11:54.678 13:10:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 82587 00:11:54.678 13:10:51 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@946 -- # '[' -z 82587 ']' 00:11:54.678 13:10:51 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@950 -- # kill -0 82587 00:11:54.678 13:10:51 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # uname 00:11:54.678 13:10:51 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:54.678 13:10:51 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 82587 00:11:54.678 killing process with pid 82587 00:11:54.678 13:10:51 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:54.678 13:10:51 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:54.678 13:10:51 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@964 -- # echo 'killing process with pid 82587' 00:11:54.678 13:10:51 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@965 -- # kill 82587 00:11:54.678 13:10:51 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@970 -- # wait 82587 00:11:54.678 13:10:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:54.678 13:10:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:54.678 13:10:51 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:54.678 13:10:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:54.678 13:10:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:54.678 13:10:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:54.678 13:10:51 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:54.678 13:10:51 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:54.935 13:10:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:54.935 00:11:54.935 real 0m2.904s 00:11:54.935 user 0m9.490s 00:11:54.935 sys 0m0.701s 00:11:54.935 13:10:51 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:54.935 13:10:51 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:54.935 ************************************ 00:11:54.935 END TEST nvmf_multitarget 00:11:54.935 ************************************ 00:11:54.935 13:10:51 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:54.935 13:10:51 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:11:54.935 13:10:51 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:54.935 13:10:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:54.935 ************************************ 00:11:54.935 START TEST nvmf_rpc 00:11:54.935 ************************************ 00:11:54.935 13:10:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:54.935 * Looking for test storage... 
00:11:54.935 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:54.935 13:10:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:54.935 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:11:54.935 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:54.935 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:54.935 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:54.935 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:54.935 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:54.935 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:54.935 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:54.935 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:54.935 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:54.935 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:54.935 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:11:54.935 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=c8b8b44b-387e-43b9-a950-dc0d98528a02 00:11:54.935 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:54.935 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:54.935 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:54.935 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:54.936 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:54.936 13:10:51 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:54.936 13:10:51 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:54.936 13:10:51 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:54.936 13:10:51 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.936 13:10:51 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.936 13:10:51 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.936 13:10:51 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:11:54.936 13:10:51 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.936 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:11:54.936 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:54.936 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:54.936 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:54.936 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:54.936 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:54.936 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:54.936 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:54.936 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:54.936 13:10:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:11:54.936 13:10:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:11:54.936 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:54.936 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:54.936 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:54.936 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:54.936 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:54.936 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:54.936 13:10:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:54.936 13:10:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:54.936 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:54.936 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:54.936 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:54.936 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:54.936 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:54.936 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:54.936 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:54.936 13:10:51 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:54.936 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:54.936 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:54.936 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:54.936 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:54.936 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:54.936 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:54.936 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:54.936 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:54.936 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:54.936 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:54.936 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:54.936 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:54.936 Cannot find device "nvmf_tgt_br" 00:11:54.936 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@155 -- # true 00:11:54.936 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:54.936 Cannot find device "nvmf_tgt_br2" 00:11:54.936 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@156 -- # true 00:11:54.936 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:54.936 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:54.936 Cannot find device "nvmf_tgt_br" 00:11:54.936 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@158 -- # true 00:11:54.936 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:55.193 Cannot find device "nvmf_tgt_br2" 00:11:55.193 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@159 -- # true 00:11:55.193 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:55.193 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:55.193 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:55.193 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:55.193 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@162 -- # true 00:11:55.193 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:55.193 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:55.193 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@163 -- # true 00:11:55.193 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:55.193 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:55.193 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:55.193 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:55.193 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:55.193 13:10:51 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:55.193 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:55.193 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:55.193 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:55.193 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:55.193 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:55.193 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:55.194 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:55.194 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:55.194 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:55.194 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:55.194 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:55.194 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:55.194 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:55.194 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:55.194 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:55.194 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:55.194 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:55.194 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:55.194 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:55.194 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:11:55.194 00:11:55.194 --- 10.0.0.2 ping statistics --- 00:11:55.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:55.194 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:11:55.194 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:55.194 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:55.194 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:11:55.194 00:11:55.194 --- 10.0.0.3 ping statistics --- 00:11:55.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:55.194 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:11:55.194 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:55.194 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:55.194 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:11:55.194 00:11:55.194 --- 10.0.0.1 ping statistics --- 00:11:55.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:55.194 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:11:55.194 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:55.194 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@433 -- # return 0 00:11:55.194 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:55.194 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:55.194 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:55.194 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:55.194 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:55.194 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:55.194 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:55.452 13:10:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:11:55.452 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:55.452 13:10:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:55.452 13:10:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:55.452 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=82820 00:11:55.452 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 82820 00:11:55.452 13:10:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@827 -- # '[' -z 82820 ']' 00:11:55.452 13:10:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:55.452 13:10:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:55.452 13:10:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:55.452 13:10:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:55.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:55.452 13:10:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:55.452 13:10:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:55.452 [2024-07-15 13:10:51.993097] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:11:55.452 [2024-07-15 13:10:51.993196] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:55.452 [2024-07-15 13:10:52.136767] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:55.710 [2024-07-15 13:10:52.230320] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:55.710 [2024-07-15 13:10:52.230371] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:55.710 [2024-07-15 13:10:52.230384] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:55.710 [2024-07-15 13:10:52.230392] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:55.710 [2024-07-15 13:10:52.230400] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:55.710 [2024-07-15 13:10:52.230953] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:55.710 [2024-07-15 13:10:52.231096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:55.710 [2024-07-15 13:10:52.231173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:55.710 [2024-07-15 13:10:52.231178] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:56.645 13:10:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:56.645 13:10:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@860 -- # return 0 00:11:56.645 13:10:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:56.645 13:10:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:56.645 13:10:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.645 13:10:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:56.645 13:10:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:11:56.645 13:10:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.645 13:10:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.645 13:10:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.645 13:10:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:11:56.645 "poll_groups": [ 00:11:56.645 { 00:11:56.645 "admin_qpairs": 0, 00:11:56.645 "completed_nvme_io": 0, 00:11:56.645 "current_admin_qpairs": 0, 00:11:56.645 "current_io_qpairs": 0, 00:11:56.645 "io_qpairs": 0, 00:11:56.645 "name": "nvmf_tgt_poll_group_000", 00:11:56.645 "pending_bdev_io": 0, 00:11:56.645 "transports": [] 00:11:56.645 }, 00:11:56.645 { 00:11:56.645 "admin_qpairs": 0, 00:11:56.645 "completed_nvme_io": 0, 00:11:56.645 "current_admin_qpairs": 0, 00:11:56.645 "current_io_qpairs": 0, 00:11:56.645 "io_qpairs": 0, 00:11:56.645 "name": "nvmf_tgt_poll_group_001", 00:11:56.645 "pending_bdev_io": 0, 00:11:56.645 "transports": [] 00:11:56.645 }, 00:11:56.645 { 00:11:56.645 "admin_qpairs": 0, 00:11:56.645 "completed_nvme_io": 0, 00:11:56.645 "current_admin_qpairs": 0, 00:11:56.645 "current_io_qpairs": 0, 00:11:56.645 "io_qpairs": 0, 00:11:56.645 "name": "nvmf_tgt_poll_group_002", 00:11:56.645 "pending_bdev_io": 0, 00:11:56.645 "transports": [] 00:11:56.645 }, 00:11:56.645 { 00:11:56.645 "admin_qpairs": 0, 00:11:56.645 "completed_nvme_io": 0, 00:11:56.645 "current_admin_qpairs": 0, 00:11:56.645 "current_io_qpairs": 0, 00:11:56.645 "io_qpairs": 0, 00:11:56.645 "name": "nvmf_tgt_poll_group_003", 00:11:56.645 "pending_bdev_io": 0, 00:11:56.645 "transports": [] 00:11:56.645 } 00:11:56.645 ], 00:11:56.645 "tick_rate": 2200000000 00:11:56.645 }' 00:11:56.645 13:10:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:11:56.645 13:10:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:11:56.645 13:10:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:11:56.645 13:10:53 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:11:56.645 13:10:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:11:56.645 13:10:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:11:56.645 13:10:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:11:56.645 13:10:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:56.645 13:10:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.645 13:10:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.645 [2024-07-15 13:10:53.178304] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:56.645 13:10:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.645 13:10:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:11:56.645 13:10:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.645 13:10:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.645 13:10:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.645 13:10:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:11:56.645 "poll_groups": [ 00:11:56.645 { 00:11:56.645 "admin_qpairs": 0, 00:11:56.645 "completed_nvme_io": 0, 00:11:56.645 "current_admin_qpairs": 0, 00:11:56.645 "current_io_qpairs": 0, 00:11:56.645 "io_qpairs": 0, 00:11:56.645 "name": "nvmf_tgt_poll_group_000", 00:11:56.645 "pending_bdev_io": 0, 00:11:56.645 "transports": [ 00:11:56.645 { 00:11:56.645 "trtype": "TCP" 00:11:56.645 } 00:11:56.645 ] 00:11:56.645 }, 00:11:56.645 { 00:11:56.645 "admin_qpairs": 0, 00:11:56.645 "completed_nvme_io": 0, 00:11:56.645 "current_admin_qpairs": 0, 00:11:56.645 "current_io_qpairs": 0, 00:11:56.645 "io_qpairs": 0, 00:11:56.645 "name": "nvmf_tgt_poll_group_001", 00:11:56.645 "pending_bdev_io": 0, 00:11:56.645 "transports": [ 00:11:56.645 { 00:11:56.645 "trtype": "TCP" 00:11:56.645 } 00:11:56.645 ] 00:11:56.645 }, 00:11:56.645 { 00:11:56.645 "admin_qpairs": 0, 00:11:56.645 "completed_nvme_io": 0, 00:11:56.645 "current_admin_qpairs": 0, 00:11:56.645 "current_io_qpairs": 0, 00:11:56.645 "io_qpairs": 0, 00:11:56.645 "name": "nvmf_tgt_poll_group_002", 00:11:56.645 "pending_bdev_io": 0, 00:11:56.645 "transports": [ 00:11:56.645 { 00:11:56.645 "trtype": "TCP" 00:11:56.645 } 00:11:56.645 ] 00:11:56.645 }, 00:11:56.645 { 00:11:56.645 "admin_qpairs": 0, 00:11:56.645 "completed_nvme_io": 0, 00:11:56.645 "current_admin_qpairs": 0, 00:11:56.645 "current_io_qpairs": 0, 00:11:56.645 "io_qpairs": 0, 00:11:56.645 "name": "nvmf_tgt_poll_group_003", 00:11:56.645 "pending_bdev_io": 0, 00:11:56.645 "transports": [ 00:11:56.645 { 00:11:56.645 "trtype": "TCP" 00:11:56.645 } 00:11:56.645 ] 00:11:56.645 } 00:11:56.645 ], 00:11:56.645 "tick_rate": 2200000000 00:11:56.645 }' 00:11:56.645 13:10:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:11:56.645 13:10:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:56.645 13:10:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:56.645 13:10:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:56.645 13:10:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:11:56.645 13:10:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:11:56.645 13:10:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
00:11:56.645 13:10:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:56.645 13:10:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:56.645 13:10:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:11:56.645 13:10:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:11:56.645 13:10:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:11:56.645 13:10:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:11:56.645 13:10:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:56.645 13:10:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.645 13:10:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.645 Malloc1 00:11:56.645 13:10:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.645 13:10:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:56.645 13:10:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.645 13:10:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.645 13:10:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.645 13:10:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:56.645 13:10:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.645 13:10:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.645 13:10:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.645 13:10:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:11:56.645 13:10:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.645 13:10:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.645 13:10:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.645 13:10:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:56.645 13:10:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.645 13:10:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.645 [2024-07-15 13:10:53.382203] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:56.904 13:10:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.904 13:10:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid=c8b8b44b-387e-43b9-a950-dc0d98528a02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -a 10.0.0.2 -s 4420 00:11:56.904 13:10:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:11:56.904 13:10:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid=c8b8b44b-387e-43b9-a950-dc0d98528a02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -a 10.0.0.2 -s 4420 00:11:56.904 13:10:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 
-- # local arg=nvme 00:11:56.904 13:10:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:56.904 13:10:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:11:56.904 13:10:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:56.904 13:10:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:11:56.904 13:10:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:56.904 13:10:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:11:56.904 13:10:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:11:56.904 13:10:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid=c8b8b44b-387e-43b9-a950-dc0d98528a02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -a 10.0.0.2 -s 4420 00:11:56.905 [2024-07-15 13:10:53.410455] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02' 00:11:56.905 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:56.905 could not add new controller: failed to write to nvme-fabrics device 00:11:56.905 13:10:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:11:56.905 13:10:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:56.905 13:10:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:56.905 13:10:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:56.905 13:10:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:11:56.905 13:10:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.905 13:10:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.905 13:10:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.905 13:10:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid=c8b8b44b-387e-43b9-a950-dc0d98528a02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:56.905 13:10:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:11:56.905 13:10:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:11:56.905 13:10:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:11:56.905 13:10:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:11:56.905 13:10:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:11:59.435 13:10:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:11:59.435 13:10:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:11:59.435 13:10:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:11:59.435 13:10:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:11:59.435 13:10:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:11:59.435 13:10:55 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:11:59.435 13:10:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:59.435 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:59.435 13:10:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:59.435 13:10:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:11:59.435 13:10:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:11:59.435 13:10:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:59.435 13:10:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:11:59.435 13:10:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:59.435 13:10:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:11:59.435 13:10:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:11:59.435 13:10:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:59.435 13:10:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.435 13:10:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:59.435 13:10:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid=c8b8b44b-387e-43b9-a950-dc0d98528a02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:59.435 13:10:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:11:59.435 13:10:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid=c8b8b44b-387e-43b9-a950-dc0d98528a02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:59.435 13:10:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:11:59.435 13:10:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:59.436 13:10:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:11:59.436 13:10:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:59.436 13:10:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:11:59.436 13:10:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:59.436 13:10:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:11:59.436 13:10:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:11:59.436 13:10:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid=c8b8b44b-387e-43b9-a950-dc0d98528a02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:59.436 [2024-07-15 13:10:55.701592] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02' 00:11:59.436 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:59.436 could not add new controller: failed to write to nvme-fabrics device 00:11:59.436 13:10:55 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@651 -- # es=1 00:11:59.436 13:10:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:59.436 13:10:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:59.436 13:10:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:59.436 13:10:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:11:59.436 13:10:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:59.436 13:10:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.436 13:10:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:59.436 13:10:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid=c8b8b44b-387e-43b9-a950-dc0d98528a02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:59.436 13:10:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:11:59.436 13:10:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:11:59.436 13:10:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:11:59.436 13:10:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:11:59.436 13:10:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:01.335 13:10:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:01.335 13:10:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:01.335 13:10:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:01.335 13:10:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:01.335 13:10:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:01.335 13:10:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:01.335 13:10:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:01.335 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:01.335 13:10:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:01.335 13:10:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:01.335 13:10:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:01.335 13:10:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:01.335 13:10:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:01.335 13:10:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:01.335 13:10:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:01.335 13:10:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:01.336 13:10:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.336 13:10:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.336 13:10:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.336 13:10:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:01.336 13:10:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:01.336 13:10:57 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:01.336 13:10:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.336 13:10:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.336 13:10:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.336 13:10:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:01.336 13:10:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.336 13:10:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.336 [2024-07-15 13:10:58.008623] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:01.336 13:10:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.336 13:10:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:01.336 13:10:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.336 13:10:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.336 13:10:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.336 13:10:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:01.336 13:10:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.336 13:10:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.336 13:10:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.336 13:10:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid=c8b8b44b-387e-43b9-a950-dc0d98528a02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:01.594 13:10:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:01.594 13:10:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:01.594 13:10:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:01.594 13:10:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:01.594 13:10:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:03.495 13:11:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:03.495 13:11:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:03.495 13:11:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:03.495 13:11:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:03.495 13:11:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:03.495 13:11:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:03.495 13:11:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:03.753 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:03.753 13:11:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:03.753 13:11:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:03.753 13:11:00 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:03.753 13:11:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:03.753 13:11:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:03.753 13:11:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:03.753 13:11:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:03.753 13:11:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:03.753 13:11:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.753 13:11:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:03.753 13:11:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.753 13:11:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:03.753 13:11:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.753 13:11:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:03.753 13:11:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.753 13:11:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:03.753 13:11:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:03.753 13:11:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.753 13:11:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:03.753 13:11:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.753 13:11:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:03.753 13:11:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.753 13:11:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:03.753 [2024-07-15 13:11:00.428741] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:03.753 13:11:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.753 13:11:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:03.753 13:11:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.753 13:11:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:03.753 13:11:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.753 13:11:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:03.753 13:11:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.753 13:11:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:03.753 13:11:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.753 13:11:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid=c8b8b44b-387e-43b9-a950-dc0d98528a02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:04.012 13:11:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:04.012 13:11:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 
-- # local i=0 00:12:04.012 13:11:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:04.012 13:11:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:04.012 13:11:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:05.910 13:11:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:05.910 13:11:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:05.910 13:11:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:05.910 13:11:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:05.910 13:11:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:05.910 13:11:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:05.910 13:11:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:06.168 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:06.168 13:11:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:06.168 13:11:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:06.168 13:11:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:06.168 13:11:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:06.168 13:11:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:06.168 13:11:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:06.168 13:11:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:06.168 13:11:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:06.168 13:11:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.168 13:11:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:06.168 13:11:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.168 13:11:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:06.168 13:11:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.168 13:11:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:06.168 13:11:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.168 13:11:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:06.168 13:11:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:06.168 13:11:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.168 13:11:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:06.168 13:11:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.168 13:11:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:06.168 13:11:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.168 13:11:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:06.168 [2024-07-15 13:11:02.742140] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:12:06.168 13:11:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.168 13:11:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:06.168 13:11:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.168 13:11:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:06.168 13:11:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.168 13:11:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:06.168 13:11:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.168 13:11:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:06.168 13:11:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.168 13:11:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid=c8b8b44b-387e-43b9-a950-dc0d98528a02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:06.426 13:11:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:06.426 13:11:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:06.426 13:11:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:06.426 13:11:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:06.426 13:11:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:08.338 13:11:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:08.338 13:11:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:08.338 13:11:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:08.338 13:11:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:08.338 13:11:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:08.338 13:11:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:08.338 13:11:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:08.338 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:08.338 13:11:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:08.338 13:11:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:08.338 13:11:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:08.338 13:11:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:08.338 13:11:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:08.338 13:11:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:08.338 13:11:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:08.338 13:11:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:08.338 13:11:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.338 13:11:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:08.338 13:11:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:12:08.338 13:11:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:08.338 13:11:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.338 13:11:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:08.338 13:11:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.338 13:11:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:08.338 13:11:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:08.338 13:11:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.338 13:11:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:08.338 13:11:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.338 13:11:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:08.338 13:11:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.338 13:11:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:08.338 [2024-07-15 13:11:05.049837] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:08.338 13:11:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.338 13:11:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:08.338 13:11:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.338 13:11:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:08.338 13:11:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.338 13:11:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:08.338 13:11:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.338 13:11:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:08.338 13:11:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.338 13:11:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid=c8b8b44b-387e-43b9-a950-dc0d98528a02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:08.596 13:11:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:08.596 13:11:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:08.596 13:11:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:08.596 13:11:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:08.596 13:11:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:11.128 13:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:11.128 13:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:11.128 13:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:11.128 13:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:11.128 13:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:11.128 
13:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:11.128 13:11:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:11.128 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:11.128 13:11:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:11.128 13:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:11.128 13:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:11.128 13:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:11.128 13:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:11.128 13:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:11.128 13:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:11.128 13:11:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:11.128 13:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.128 13:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:11.128 13:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.128 13:11:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:11.128 13:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.128 13:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:11.128 13:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.128 13:11:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:11.128 13:11:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:11.128 13:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.128 13:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:11.128 13:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.128 13:11:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:11.128 13:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.128 13:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:11.128 [2024-07-15 13:11:07.349274] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:11.128 13:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.128 13:11:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:11.128 13:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.128 13:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:11.128 13:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.128 13:11:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:11.128 13:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.128 13:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:11.128 13:11:07 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.128 13:11:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid=c8b8b44b-387e-43b9-a950-dc0d98528a02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:11.128 13:11:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:11.128 13:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:11.128 13:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:11.128 13:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:11.128 13:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:13.031 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:13.031 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:13.031 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:13.031 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:13.031 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:13.031 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:13.031 13:11:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:13.031 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:13.031 13:11:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:13.031 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:13.031 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:13.031 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:13.031 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:13.031 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:13.031 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:13.031 13:11:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:13.031 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.031 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.031 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.031 13:11:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:13.031 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.031 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.031 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.031 13:11:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:12:13.031 13:11:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:13.031 13:11:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:13.031 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.031 13:11:09 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:12:13.031 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.031 13:11:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:13.031 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.031 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.031 [2024-07-15 13:11:09.757974] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:13.031 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.031 13:11:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:13.031 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.031 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.383 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.383 13:11:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:13.383 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.383 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.383 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.383 13:11:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:13.383 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.383 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.383 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.383 13:11:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:13.383 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.383 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.383 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.383 13:11:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:13.383 13:11:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:13.383 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.384 [2024-07-15 13:11:09.806047] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.384 [2024-07-15 13:11:09.854130] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
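In the earlier loop (rpc.sh@81 through @94 above) each iteration also exercised the host side: once the subsystem was listening, the initiator connected with nvme-cli, waited until a block device with the expected serial appeared, and then disconnected again. The waitforserial / waitforserial_disconnect helpers seen in the trace amount to polling lsblk; a reduced sketch using the host NQN and serial from this run:

  # Host side of each iteration: connect, wait for the namespace, disconnect.
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
       --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02
  until lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do
      sleep 2    # waitforserial: poll until the device registers with the expected serial
  done
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do
      sleep 1    # waitforserial_disconnect: wait for the device to disappear again
  done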
00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.384 [2024-07-15 13:11:09.906273] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
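Each pass of the loop being traced here (rpc.sh@99 through @107) builds a subsystem, exposes it on the TCP listener, attaches the Malloc1 bdev as a namespace, opens it to any host, and then immediately tears it back down without connecting. Written out as plain rpc.py calls (a sketch using the NQN, serial, address and port from this run; rpc_cmd in the trace forwards to scripts/rpc.py):

  # One iteration of the create/teardown loop from the trace
  rpc=scripts/rpc.py
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1   # namespace backed by the Malloc bdev
  $rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1   # drop the host allow list
  $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1      # namespace ID 1
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1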
00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.384 [2024-07-15 13:11:09.954326] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.384 13:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.384 13:11:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.385 13:11:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:12:13.385 "poll_groups": [ 00:12:13.385 { 00:12:13.385 "admin_qpairs": 2, 00:12:13.385 "completed_nvme_io": 66, 00:12:13.385 "current_admin_qpairs": 0, 00:12:13.385 "current_io_qpairs": 0, 00:12:13.385 "io_qpairs": 16, 00:12:13.385 "name": "nvmf_tgt_poll_group_000", 00:12:13.385 "pending_bdev_io": 0, 00:12:13.385 "transports": [ 00:12:13.385 { 00:12:13.385 "trtype": "TCP" 00:12:13.385 } 00:12:13.385 ] 00:12:13.385 }, 00:12:13.385 { 00:12:13.385 "admin_qpairs": 3, 00:12:13.385 "completed_nvme_io": 67, 00:12:13.385 "current_admin_qpairs": 0, 00:12:13.385 "current_io_qpairs": 0, 00:12:13.385 "io_qpairs": 17, 00:12:13.385 "name": "nvmf_tgt_poll_group_001", 00:12:13.385 "pending_bdev_io": 0, 00:12:13.385 "transports": [ 00:12:13.385 { 00:12:13.385 "trtype": "TCP" 00:12:13.385 } 00:12:13.385 ] 00:12:13.385 }, 00:12:13.385 { 00:12:13.385 "admin_qpairs": 1, 00:12:13.385 
"completed_nvme_io": 120, 00:12:13.385 "current_admin_qpairs": 0, 00:12:13.385 "current_io_qpairs": 0, 00:12:13.385 "io_qpairs": 19, 00:12:13.385 "name": "nvmf_tgt_poll_group_002", 00:12:13.385 "pending_bdev_io": 0, 00:12:13.385 "transports": [ 00:12:13.385 { 00:12:13.385 "trtype": "TCP" 00:12:13.385 } 00:12:13.385 ] 00:12:13.385 }, 00:12:13.385 { 00:12:13.385 "admin_qpairs": 1, 00:12:13.385 "completed_nvme_io": 167, 00:12:13.385 "current_admin_qpairs": 0, 00:12:13.385 "current_io_qpairs": 0, 00:12:13.385 "io_qpairs": 18, 00:12:13.385 "name": "nvmf_tgt_poll_group_003", 00:12:13.385 "pending_bdev_io": 0, 00:12:13.385 "transports": [ 00:12:13.385 { 00:12:13.385 "trtype": "TCP" 00:12:13.385 } 00:12:13.385 ] 00:12:13.385 } 00:12:13.385 ], 00:12:13.385 "tick_rate": 2200000000 00:12:13.385 }' 00:12:13.385 13:11:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:13.385 13:11:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:13.385 13:11:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:13.385 13:11:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:13.385 13:11:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:13.385 13:11:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:13.385 13:11:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:13.385 13:11:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:13.385 13:11:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:13.385 13:11:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:12:13.385 13:11:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:13.385 13:11:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:13.385 13:11:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:12:13.385 13:11:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:13.385 13:11:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:12:13.644 13:11:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:13.644 13:11:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:12:13.644 13:11:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:13.644 13:11:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:13.644 rmmod nvme_tcp 00:12:13.644 rmmod nvme_fabrics 00:12:13.644 rmmod nvme_keyring 00:12:13.644 13:11:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:13.644 13:11:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:12:13.644 13:11:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:12:13.644 13:11:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 82820 ']' 00:12:13.644 13:11:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 82820 00:12:13.644 13:11:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@946 -- # '[' -z 82820 ']' 00:12:13.644 13:11:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@950 -- # kill -0 82820 00:12:13.644 13:11:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # uname 00:12:13.644 13:11:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:13.644 13:11:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 82820 00:12:13.644 13:11:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # 
process_name=reactor_0 00:12:13.644 13:11:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:13.644 13:11:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 82820' 00:12:13.644 killing process with pid 82820 00:12:13.644 13:11:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@965 -- # kill 82820 00:12:13.644 13:11:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@970 -- # wait 82820 00:12:13.904 13:11:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:13.904 13:11:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:13.904 13:11:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:13.904 13:11:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:13.904 13:11:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:13.904 13:11:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:13.904 13:11:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:13.904 13:11:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:13.904 13:11:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:13.904 ************************************ 00:12:13.904 END TEST nvmf_rpc 00:12:13.904 ************************************ 00:12:13.904 00:12:13.904 real 0m19.001s 00:12:13.904 user 1m11.701s 00:12:13.904 sys 0m2.446s 00:12:13.904 13:11:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:13.904 13:11:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.904 13:11:10 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:13.904 13:11:10 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:13.904 13:11:10 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:13.904 13:11:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:13.904 ************************************ 00:12:13.904 START TEST nvmf_invalid 00:12:13.904 ************************************ 00:12:13.904 13:11:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:13.904 * Looking for test storage... 
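Before the nvmf_invalid run that starts here, the nvmf_rpc teardown above unloaded the kernel nvme modules and stopped the target process (pid 82820 in this run) through autotest_common.sh's killprocess helper, which checks that the pid is still alive and still looks like an SPDK reactor before signalling it. A reduced sketch of that pattern (the real helper carries additional platform checks beyond what is shown):

  # Stop the target roughly the way killprocess does in the trace above.
  killprocess() {
      local pid=$1
      kill -0 "$pid" || return 0                 # nothing left to kill
      local name
      name=$(ps --no-headers -o comm= "$pid")    # reactor_0 for the SPDK app here
      [[ $name != sudo ]] || return 1            # never signal a bare sudo wrapper
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" 2>/dev/null || true            # reap it when it is our child
  }
  killprocess 82820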
00:12:13.904 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:13.904 13:11:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:13.904 13:11:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:12:13.904 13:11:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:13.904 13:11:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:13.904 13:11:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:13.904 13:11:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:13.904 13:11:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:13.904 13:11:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:13.904 13:11:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:13.904 13:11:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:13.904 13:11:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:13.904 13:11:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:13.904 13:11:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:12:13.904 13:11:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=c8b8b44b-387e-43b9-a950-dc0d98528a02 00:12:13.904 13:11:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:13.904 13:11:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:13.904 13:11:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:13.904 13:11:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:13.904 13:11:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:13.904 13:11:10 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:13.904 13:11:10 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:13.904 13:11:10 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:13.904 13:11:10 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.904 13:11:10 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.904 
13:11:10 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.904 13:11:10 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:12:13.904 13:11:10 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.904 13:11:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:12:13.904 13:11:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:13.904 13:11:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:13.904 13:11:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:13.904 13:11:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:13.904 13:11:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:13.904 13:11:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:13.904 13:11:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:14.163 13:11:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:14.163 13:11:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:12:14.163 13:11:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:14.163 13:11:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:14.163 13:11:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:12:14.163 13:11:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:12:14.163 13:11:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:12:14.163 13:11:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:14.163 13:11:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:14.163 13:11:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:14.163 13:11:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:14.163 13:11:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:14.163 13:11:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:14.163 13:11:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:14.163 13:11:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:14.163 13:11:10 
nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:14.163 13:11:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:14.163 13:11:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:14.163 13:11:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:14.163 13:11:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:14.163 13:11:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:14.163 13:11:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:14.163 13:11:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:14.163 13:11:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:14.163 13:11:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:14.163 13:11:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:14.163 13:11:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:14.163 13:11:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:14.163 13:11:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:14.163 13:11:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:14.163 13:11:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:14.163 13:11:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:14.163 13:11:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:14.163 13:11:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:14.163 13:11:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:14.163 Cannot find device "nvmf_tgt_br" 00:12:14.163 13:11:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@155 -- # true 00:12:14.163 13:11:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:14.163 Cannot find device "nvmf_tgt_br2" 00:12:14.163 13:11:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@156 -- # true 00:12:14.163 13:11:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:14.163 13:11:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:14.163 Cannot find device "nvmf_tgt_br" 00:12:14.163 13:11:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@158 -- # true 00:12:14.164 13:11:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:14.164 Cannot find device "nvmf_tgt_br2" 00:12:14.164 13:11:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@159 -- # true 00:12:14.164 13:11:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:14.164 13:11:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:14.164 13:11:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:14.164 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:14.164 13:11:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@162 -- # true 00:12:14.164 13:11:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:14.164 Cannot open network 
namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:14.164 13:11:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@163 -- # true 00:12:14.164 13:11:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:14.164 13:11:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:14.164 13:11:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:14.164 13:11:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:14.164 13:11:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:14.164 13:11:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:14.164 13:11:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:14.164 13:11:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:14.164 13:11:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:14.164 13:11:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:14.422 13:11:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:14.422 13:11:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:14.422 13:11:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:14.422 13:11:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:14.422 13:11:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:14.422 13:11:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:14.422 13:11:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:14.422 13:11:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:14.422 13:11:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:14.422 13:11:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:14.422 13:11:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:14.422 13:11:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:14.422 13:11:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:14.423 13:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:14.423 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:14.423 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:12:14.423 00:12:14.423 --- 10.0.0.2 ping statistics --- 00:12:14.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:14.423 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:12:14.423 13:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:14.423 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:12:14.423 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:12:14.423 00:12:14.423 --- 10.0.0.3 ping statistics --- 00:12:14.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:14.423 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:12:14.423 13:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:14.423 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:14.423 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:12:14.423 00:12:14.423 --- 10.0.0.1 ping statistics --- 00:12:14.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:14.423 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:12:14.423 13:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:14.423 13:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@433 -- # return 0 00:12:14.423 13:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:14.423 13:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:14.423 13:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:14.423 13:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:14.423 13:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:14.423 13:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:14.423 13:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:14.423 13:11:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:14.423 13:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:14.423 13:11:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:14.423 13:11:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:14.423 13:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=83333 00:12:14.423 13:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 83333 00:12:14.423 13:11:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@827 -- # '[' -z 83333 ']' 00:12:14.423 13:11:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:14.423 13:11:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:14.423 13:11:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:14.423 13:11:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:14.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:14.423 13:11:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:14.423 13:11:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:14.423 [2024-07-15 13:11:11.102360] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:12:14.423 [2024-07-15 13:11:11.102476] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:14.682 [2024-07-15 13:11:11.242504] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:14.682 [2024-07-15 13:11:11.341717] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:14.682 [2024-07-15 13:11:11.341772] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:14.682 [2024-07-15 13:11:11.341784] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:14.682 [2024-07-15 13:11:11.341792] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:14.682 [2024-07-15 13:11:11.341799] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:14.682 [2024-07-15 13:11:11.341902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:14.682 [2024-07-15 13:11:11.342649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:14.682 [2024-07-15 13:11:11.342702] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:14.682 [2024-07-15 13:11:11.342707] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.618 13:11:12 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:15.618 13:11:12 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@860 -- # return 0 00:12:15.618 13:11:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:15.618 13:11:12 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:15.618 13:11:12 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:15.618 13:11:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:15.618 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:15.618 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode20137 00:12:15.618 [2024-07-15 13:11:12.274097] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:15.618 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='2024/07/15 13:11:12 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode20137 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:12:15.618 request: 00:12:15.618 { 00:12:15.618 "method": "nvmf_create_subsystem", 00:12:15.618 "params": { 00:12:15.618 "nqn": "nqn.2016-06.io.spdk:cnode20137", 00:12:15.618 "tgt_name": "foobar" 00:12:15.618 } 00:12:15.618 } 00:12:15.618 Got JSON-RPC error response 00:12:15.618 GoRPCClient: error on JSON-RPC call' 00:12:15.618 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ 2024/07/15 13:11:12 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode20137 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:12:15.618 request: 00:12:15.618 { 
00:12:15.618 "method": "nvmf_create_subsystem", 00:12:15.618 "params": { 00:12:15.618 "nqn": "nqn.2016-06.io.spdk:cnode20137", 00:12:15.618 "tgt_name": "foobar" 00:12:15.618 } 00:12:15.618 } 00:12:15.618 Got JSON-RPC error response 00:12:15.619 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:15.619 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:15.619 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode17710 00:12:15.877 [2024-07-15 13:11:12.566528] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17710: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:15.877 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='2024/07/15 13:11:12 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode17710 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:12:15.877 request: 00:12:15.877 { 00:12:15.877 "method": "nvmf_create_subsystem", 00:12:15.877 "params": { 00:12:15.877 "nqn": "nqn.2016-06.io.spdk:cnode17710", 00:12:15.877 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:12:15.877 } 00:12:15.877 } 00:12:15.877 Got JSON-RPC error response 00:12:15.877 GoRPCClient: error on JSON-RPC call' 00:12:15.877 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ 2024/07/15 13:11:12 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode17710 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:12:15.877 request: 00:12:15.877 { 00:12:15.877 "method": "nvmf_create_subsystem", 00:12:15.877 "params": { 00:12:15.877 "nqn": "nqn.2016-06.io.spdk:cnode17710", 00:12:15.877 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:12:15.877 } 00:12:15.877 } 00:12:15.877 Got JSON-RPC error response 00:12:15.877 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:15.877 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:15.877 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode30473 00:12:16.135 [2024-07-15 13:11:12.798912] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30473: invalid model number 'SPDK_Controller' 00:12:16.135 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='2024/07/15 13:11:12 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode30473], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:12:16.135 request: 00:12:16.135 { 00:12:16.135 "method": "nvmf_create_subsystem", 00:12:16.135 "params": { 00:12:16.135 "nqn": "nqn.2016-06.io.spdk:cnode30473", 00:12:16.135 "model_number": "SPDK_Controller\u001f" 00:12:16.135 } 00:12:16.135 } 00:12:16.135 Got JSON-RPC error response 00:12:16.135 GoRPCClient: error on JSON-RPC call' 00:12:16.135 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ 2024/07/15 13:11:12 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller 
nqn:nqn.2016-06.io.spdk:cnode30473], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:12:16.135 request: 00:12:16.135 { 00:12:16.135 "method": "nvmf_create_subsystem", 00:12:16.135 "params": { 00:12:16.135 "nqn": "nqn.2016-06.io.spdk:cnode30473", 00:12:16.135 "model_number": "SPDK_Controller\u001f" 00:12:16.135 } 00:12:16.135 } 00:12:16.135 Got JSON-RPC error response 00:12:16.135 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:16.135 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:12:16.135 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:12:16.135 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:16.135 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:16.135 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:16.135 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:16.135 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.135 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:12:16.135 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:12:16.135 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:12:16.135 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.135 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.135 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:12:16.135 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:12:16.135 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:12:16.135 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.135 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.135 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:12:16.135 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:12:16.135 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:12:16.135 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.135 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.135 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:12:16.135 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:12:16.135 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:12:16.135 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.135 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.135 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:12:16.135 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:12:16.135 13:11:12 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:12:16.135 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.135 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.136 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:12:16.136 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:12:16.136 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:12:16.136 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.136 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.136 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:12:16.136 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:12:16.136 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:12:16.136 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.136 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.136 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:12:16.136 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:12:16.136 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:12:16.136 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.136 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.136 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:12:16.136 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:12:16.136 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:12:16.136 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.136 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.136 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:12:16.136 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:12:16.136 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:12:16.136 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.136 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.393 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:12:16.393 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:12:16.393 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:12:16.393 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.393 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.393 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:12:16.393 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:12:16.393 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:12:16.393 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.393 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.393 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:12:16.393 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:12:16.393 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:12:16.393 13:11:12 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.393 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.393 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:12:16.393 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:12:16.393 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:12:16.393 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.393 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.393 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:12:16.393 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:12:16.393 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:12:16.393 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.393 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.393 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:12:16.393 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:12:16.393 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:12:16.393 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.393 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.393 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:12:16.393 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:12:16.393 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:12:16.393 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.394 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.394 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:12:16.394 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:12:16.394 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:12:16.394 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.394 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.394 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:12:16.394 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:12:16.394 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:12:16.394 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.394 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.394 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:12:16.394 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:12:16.394 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:12:16.394 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.394 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.394 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:12:16.394 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:12:16.394 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:12:16.394 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.394 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.394 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ 9 == \- ]] 00:12:16.394 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '9b`|k`4.Yuc*b{~2\d.xr' 00:12:16.394 13:11:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s '9b`|k`4.Yuc*b{~2\d.xr' nqn.2016-06.io.spdk:cnode16600 00:12:16.666 [2024-07-15 13:11:13.175574] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16600: invalid serial number '9b`|k`4.Yuc*b{~2\d.xr' 00:12:16.666 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='2024/07/15 13:11:13 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode16600 serial_number:9b`|k`4.Yuc*b{~2\d.xr], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN 9b`|k`4.Yuc*b{~2\d.xr 00:12:16.666 request: 00:12:16.666 { 00:12:16.666 "method": "nvmf_create_subsystem", 00:12:16.666 "params": { 00:12:16.666 "nqn": "nqn.2016-06.io.spdk:cnode16600", 00:12:16.666 "serial_number": "9b`|k`4.Yuc*b{~2\\d.xr" 00:12:16.666 } 00:12:16.666 } 00:12:16.666 Got JSON-RPC error response 00:12:16.666 GoRPCClient: error on JSON-RPC call' 00:12:16.666 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ 2024/07/15 13:11:13 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode16600 serial_number:9b`|k`4.Yuc*b{~2\d.xr], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN 9b`|k`4.Yuc*b{~2\d.xr 00:12:16.666 request: 00:12:16.666 { 00:12:16.666 "method": "nvmf_create_subsystem", 00:12:16.666 "params": { 00:12:16.666 "nqn": "nqn.2016-06.io.spdk:cnode16600", 00:12:16.666 "serial_number": "9b`|k`4.Yuc*b{~2\\d.xr" 00:12:16.666 } 00:12:16.666 } 00:12:16.666 Got JSON-RPC error response 00:12:16.666 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:16.666 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:12:16.666 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:12:16.666 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' 
'89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:16.666 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:16.666 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:16.666 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:16.666 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.666 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:12:16.666 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:12:16.666 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:12:16.666 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.666 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.666 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:12:16.666 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:12:16.666 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:12:16.666 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.666 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.666 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:12:16.666 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:12:16.666 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:12:16.666 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.666 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.666 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:12:16.666 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:12:16.666 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:12:16.666 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.666 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.666 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:12:16.666 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:12:16.666 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:12:16.666 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.666 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.666 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:12:16.666 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:12:16.666 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:12:16.666 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid 
-- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid 
-- target/invalid.sh@25 -- # printf %x 62 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x7e' 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # string+=S 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll++ )) 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ E == \- ]] 00:12:16.667 13:11:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'ElO*6#E<~t~#lO*6#E<~t~#lO*6#E<~t~#lO*6#E<~t~#lO*6#E<~t~#lO*6#E<~t~#lO*6#E<~t~#lO*6#E<~t~#lO*6#E<~t~# /dev/null' 00:12:19.874 13:11:16 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:19.875 13:11:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:19.875 00:12:19.875 real 0m5.915s 00:12:19.875 user 0m23.410s 00:12:19.875 sys 0m1.272s 00:12:19.875 13:11:16 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:19.875 13:11:16 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:19.875 ************************************ 00:12:19.875 END TEST nvmf_invalid 00:12:19.875 ************************************ 00:12:19.875 13:11:16 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:12:19.875 13:11:16 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:19.875 13:11:16 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:19.875 13:11:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:19.875 ************************************ 00:12:19.875 START TEST nvmf_abort 00:12:19.875 ************************************ 00:12:19.875 13:11:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:12:19.875 * Looking for test storage... 
00:12:19.875 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:19.875 13:11:16 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:19.875 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:12:19.875 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:19.875 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:19.875 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:19.875 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:19.875 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:19.875 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:19.875 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:19.875 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:19.875 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:19.875 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:19.875 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:12:19.875 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=c8b8b44b-387e-43b9-a950-dc0d98528a02 00:12:19.875 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:19.875 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:19.875 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:19.875 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:19.875 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:19.875 13:11:16 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:19.875 13:11:16 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:19.875 13:11:16 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:19.875 13:11:16 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.133 13:11:16 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.133 13:11:16 nvmf_tcp.nvmf_abort -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.133 13:11:16 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:12:20.133 13:11:16 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.133 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:12:20.133 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:20.133 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:20.133 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:20.133 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:20.133 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:20.133 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:20.133 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:20.133 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:20.133 13:11:16 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:20.133 13:11:16 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:12:20.133 13:11:16 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:12:20.133 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:20.133 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:20.133 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:20.133 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:20.133 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:20.133 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:20.133 13:11:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:20.133 13:11:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:20.133 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:20.133 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:20.134 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:20.134 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:20.134 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:20.134 13:11:16 nvmf_tcp.nvmf_abort -- 
nvmf/common.sh@432 -- # nvmf_veth_init 00:12:20.134 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:20.134 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:20.134 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:20.134 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:20.134 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:20.134 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:20.134 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:20.134 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:20.134 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:20.134 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:20.134 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:20.134 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:20.134 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:20.134 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:20.134 Cannot find device "nvmf_tgt_br" 00:12:20.134 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@155 -- # true 00:12:20.134 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:20.134 Cannot find device "nvmf_tgt_br2" 00:12:20.134 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@156 -- # true 00:12:20.134 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:20.134 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:20.134 Cannot find device "nvmf_tgt_br" 00:12:20.134 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@158 -- # true 00:12:20.134 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:20.134 Cannot find device "nvmf_tgt_br2" 00:12:20.134 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@159 -- # true 00:12:20.134 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:20.134 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:20.134 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:20.134 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:20.134 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@162 -- # true 00:12:20.134 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:20.134 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:20.134 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@163 -- # true 00:12:20.134 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:20.134 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:20.134 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:20.134 13:11:16 nvmf_tcp.nvmf_abort -- 
nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:20.134 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:20.134 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:20.134 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:20.134 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:20.134 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:20.134 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:20.134 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:20.134 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:20.134 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:20.134 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:20.134 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:20.134 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:20.134 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:20.134 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:20.391 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:20.391 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:20.391 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:20.391 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:20.391 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:20.391 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:20.391 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:20.391 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:12:20.391 00:12:20.391 --- 10.0.0.2 ping statistics --- 00:12:20.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:20.391 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:12:20.391 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:20.391 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:20.391 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.084 ms 00:12:20.391 00:12:20.391 --- 10.0.0.3 ping statistics --- 00:12:20.392 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:20.392 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:12:20.392 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:20.392 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:20.392 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:12:20.392 00:12:20.392 --- 10.0.0.1 ping statistics --- 00:12:20.392 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:20.392 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:12:20.392 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:20.392 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@433 -- # return 0 00:12:20.392 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:20.392 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:20.392 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:20.392 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:20.392 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:20.392 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:20.392 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:20.392 13:11:16 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:12:20.392 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:20.392 13:11:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:20.392 13:11:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:20.392 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=83841 00:12:20.392 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:20.392 13:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 83841 00:12:20.392 13:11:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@827 -- # '[' -z 83841 ']' 00:12:20.392 13:11:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:20.392 13:11:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:20.392 13:11:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:20.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:20.392 13:11:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:20.392 13:11:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:20.392 [2024-07-15 13:11:17.022514] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:12:20.392 [2024-07-15 13:11:17.022625] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:20.649 [2024-07-15 13:11:17.158687] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:20.649 [2024-07-15 13:11:17.267931] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:20.649 [2024-07-15 13:11:17.268023] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:20.649 [2024-07-15 13:11:17.268045] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:20.649 [2024-07-15 13:11:17.268059] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:20.649 [2024-07-15 13:11:17.268070] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:20.649 [2024-07-15 13:11:17.268630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:20.649 [2024-07-15 13:11:17.269091] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:20.649 [2024-07-15 13:11:17.269124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:21.582 13:11:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:21.582 13:11:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@860 -- # return 0 00:12:21.582 13:11:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:21.582 13:11:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:21.582 13:11:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:21.582 13:11:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:21.582 13:11:18 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:12:21.582 13:11:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.582 13:11:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:21.582 [2024-07-15 13:11:18.057898] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:21.582 13:11:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:21.582 13:11:18 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:12:21.582 13:11:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.582 13:11:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:21.582 Malloc0 00:12:21.582 13:11:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:21.582 13:11:18 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:21.582 13:11:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.582 13:11:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:21.582 Delay0 00:12:21.582 13:11:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:21.582 13:11:18 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:21.582 13:11:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.582 13:11:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:21.582 13:11:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:21.582 13:11:18 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:12:21.582 13:11:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.582 13:11:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:21.582 13:11:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:21.582 13:11:18 
nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:21.582 13:11:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.582 13:11:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:21.582 [2024-07-15 13:11:18.129744] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:21.582 13:11:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:21.582 13:11:18 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:21.582 13:11:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.582 13:11:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:21.582 13:11:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:21.582 13:11:18 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:12:21.582 [2024-07-15 13:11:18.306020] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:12:24.106 Initializing NVMe Controllers 00:12:24.106 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:12:24.106 controller IO queue size 128 less than required 00:12:24.106 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:12:24.106 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:12:24.106 Initialization complete. Launching workers. 
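Editor's note: condensed from the abort.sh trace above, the target-side setup and the abort workload reduce to the RPC sequence below (shown here with scripts/rpc.py; the test itself goes through its rpc_cmd wrapper). The arguments and the 10.0.0.2:4420 listener are taken from this run; treat it as a sketch of what the log exercises, not a verbatim replay.

    # Target side: TCP transport, a delay bdev stacked on a malloc bdev, and a subsystem exposing it.
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
    ./scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0
    ./scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # Initiator side: queue deep I/O against the slow Delay0 namespace and issue aborts against it.
    ./build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128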
00:12:24.106 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 34182 00:12:24.106 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 34243, failed to submit 62 00:12:24.106 success 34186, unsuccess 57, failed 0 00:12:24.106 13:11:20 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:24.106 13:11:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.106 13:11:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:24.106 13:11:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.106 13:11:20 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:12:24.106 13:11:20 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:12:24.106 13:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:24.106 13:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:12:24.106 13:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:24.106 13:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:12:24.106 13:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:24.106 13:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:24.106 rmmod nvme_tcp 00:12:24.106 rmmod nvme_fabrics 00:12:24.106 rmmod nvme_keyring 00:12:24.106 13:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:24.106 13:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:12:24.106 13:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:12:24.106 13:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 83841 ']' 00:12:24.106 13:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 83841 00:12:24.106 13:11:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@946 -- # '[' -z 83841 ']' 00:12:24.106 13:11:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@950 -- # kill -0 83841 00:12:24.106 13:11:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # uname 00:12:24.106 13:11:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:24.106 13:11:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 83841 00:12:24.106 13:11:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:12:24.106 13:11:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:12:24.106 killing process with pid 83841 00:12:24.106 13:11:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 83841' 00:12:24.106 13:11:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@965 -- # kill 83841 00:12:24.106 13:11:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@970 -- # wait 83841 00:12:24.106 13:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:24.106 13:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:24.106 13:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:24.106 13:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:24.106 13:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:24.106 13:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:24.106 13:11:20 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:24.106 13:11:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:24.106 13:11:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:24.106 ************************************ 00:12:24.106 END TEST nvmf_abort 00:12:24.106 ************************************ 00:12:24.106 00:12:24.106 real 0m4.221s 00:12:24.106 user 0m12.194s 00:12:24.106 sys 0m1.043s 00:12:24.106 13:11:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:24.106 13:11:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:24.106 13:11:20 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:12:24.106 13:11:20 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:24.106 13:11:20 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:24.106 13:11:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:24.106 ************************************ 00:12:24.106 START TEST nvmf_ns_hotplug_stress 00:12:24.106 ************************************ 00:12:24.106 13:11:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:12:24.365 * Looking for test storage... 00:12:24.365 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:24.365 13:11:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:24.365 13:11:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:12:24.365 13:11:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:24.365 13:11:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:24.365 13:11:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:24.365 13:11:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:24.365 13:11:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:24.365 13:11:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:24.365 13:11:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:24.365 13:11:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:24.365 13:11:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:24.365 13:11:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:24.365 13:11:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:12:24.365 13:11:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=c8b8b44b-387e-43b9-a950-dc0d98528a02 00:12:24.365 13:11:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:24.365 13:11:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:24.365 13:11:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:24.365 13:11:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:24.365 13:11:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:24.365 13:11:20 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:24.365 13:11:20 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:24.365 13:11:20 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:24.365 13:11:20 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.365 13:11:20 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.365 13:11:20 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.365 13:11:20 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:12:24.365 13:11:20 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.365 13:11:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:12:24.365 13:11:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:24.365 13:11:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:24.365 13:11:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:24.365 13:11:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:24.365 13:11:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:24.365 13:11:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:24.365 13:11:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:24.365 13:11:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:24.365 13:11:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:24.365 13:11:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:12:24.365 13:11:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:24.365 13:11:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:24.365 13:11:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:24.365 13:11:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:24.365 13:11:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:24.365 13:11:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:24.365 13:11:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:24.365 13:11:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:24.365 13:11:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:24.365 13:11:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:24.365 13:11:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:24.365 13:11:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:24.365 13:11:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:24.365 13:11:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:24.365 13:11:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:24.365 13:11:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:24.365 13:11:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:24.365 13:11:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:24.365 13:11:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:24.365 13:11:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:24.365 13:11:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:24.365 13:11:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:24.365 13:11:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:24.365 13:11:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:24.366 13:11:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:24.366 13:11:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:24.366 13:11:20 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:24.366 13:11:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:24.366 Cannot find device "nvmf_tgt_br" 00:12:24.366 13:11:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # true 00:12:24.366 13:11:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:24.366 Cannot find device "nvmf_tgt_br2" 00:12:24.366 13:11:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # true 00:12:24.366 13:11:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:24.366 13:11:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:24.366 Cannot find device "nvmf_tgt_br" 00:12:24.366 13:11:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # true 00:12:24.366 13:11:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:24.366 Cannot find device "nvmf_tgt_br2" 00:12:24.366 13:11:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # true 00:12:24.366 13:11:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:24.366 13:11:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:24.366 13:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:24.366 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:24.366 13:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # true 00:12:24.366 13:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:24.366 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:24.366 13:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # true 00:12:24.366 13:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:24.366 13:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:24.366 13:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:24.366 13:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:24.366 13:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:24.366 13:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:24.366 13:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:24.366 13:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:24.366 13:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:24.366 13:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:24.624 13:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:24.624 13:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@185 -- # ip link 
set nvmf_tgt_br up 00:12:24.624 13:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:24.624 13:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:24.624 13:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:24.624 13:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:24.624 13:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:24.624 13:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:24.624 13:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:24.624 13:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:24.624 13:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:24.624 13:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:24.624 13:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:24.624 13:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:24.624 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:24.624 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:12:24.624 00:12:24.624 --- 10.0.0.2 ping statistics --- 00:12:24.624 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:24.624 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:12:24.624 13:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:24.624 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:24.624 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:12:24.624 00:12:24.624 --- 10.0.0.3 ping statistics --- 00:12:24.624 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:24.624 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:12:24.624 13:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:24.624 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:24.624 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:12:24.624 00:12:24.624 --- 10.0.0.1 ping statistics --- 00:12:24.624 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:24.624 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:12:24.624 13:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:24.624 13:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@433 -- # return 0 00:12:24.624 13:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:24.624 13:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:24.624 13:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:24.624 13:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:24.624 13:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:24.624 13:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:24.624 13:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:24.624 13:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:12:24.624 13:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:24.624 13:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:24.624 13:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:12:24.624 13:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=84103 00:12:24.624 13:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:24.624 13:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 84103 00:12:24.624 13:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@827 -- # '[' -z 84103 ']' 00:12:24.624 13:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:24.624 13:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:24.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:24.624 13:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:24.624 13:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:24.624 13:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:12:24.624 [2024-07-15 13:11:21.274931] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:12:24.624 [2024-07-15 13:11:21.275037] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:24.883 [2024-07-15 13:11:21.412586] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:24.883 [2024-07-15 13:11:21.511158] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:24.883 [2024-07-15 13:11:21.511257] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:24.883 [2024-07-15 13:11:21.511276] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:24.883 [2024-07-15 13:11:21.511289] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:24.883 [2024-07-15 13:11:21.511299] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:24.883 [2024-07-15 13:11:21.511455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:24.883 [2024-07-15 13:11:21.512136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:24.883 [2024-07-15 13:11:21.512182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:25.816 13:11:22 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:25.816 13:11:22 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # return 0 00:12:25.816 13:11:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:25.816 13:11:22 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:25.816 13:11:22 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:12:25.816 13:11:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:25.816 13:11:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:12:25.816 13:11:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:26.076 [2024-07-15 13:11:22.565288] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:26.076 13:11:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:26.403 13:11:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:26.403 [2024-07-15 13:11:23.107654] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:26.677 13:11:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:26.677 13:11:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:12:26.935 Malloc0 00:12:26.935 13:11:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:27.195 Delay0 00:12:27.195 13:11:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:27.454 13:11:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:12:27.719 NULL1 00:12:27.719 
13:11:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:27.980 13:11:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=84234 00:12:27.980 13:11:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:12:27.980 13:11:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84234 00:12:27.980 13:11:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:29.357 Read completed with error (sct=0, sc=11) 00:12:29.357 13:11:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:29.357 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:29.357 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:29.357 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:29.357 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:29.357 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:29.615 13:11:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:12:29.615 13:11:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:12:29.615 true 00:12:29.615 13:11:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84234 00:12:29.615 13:11:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:30.547 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:30.547 13:11:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:30.804 13:11:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:12:30.804 13:11:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:12:31.079 true 00:12:31.079 13:11:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84234 00:12:31.079 13:11:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:31.337 13:11:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:31.596 13:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:12:31.597 13:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:12:31.597 true 00:12:31.597 13:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84234 
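Editor's note: the repeating @44-@50 pattern in the ns_hotplug_stress trace is the heart of the test — while the spdk_nvme_perf workload started at @40 is still alive, the script keeps detaching and re-attaching a namespace and growing the NULL1 bdev, so the initiator sees hot-remove, hot-add, and resize events under load. The sketch below is a condensed reconstruction from this trace (same RPCs and perf arguments as the run); the real script's exact ordering and error handling differ.

    # Start a 30s randread workload against the target, then churn namespaces while it runs.
    ./build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000 &
    PERF_PID=$!
    null_size=1000
    while kill -0 "$PERF_PID" 2>/dev/null; do                                       # perf still running?
        ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1      # hot-remove nsid 1
        ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0    # hot-add it back
        (( null_size++ ))
        ./scripts/rpc.py bdev_null_resize NULL1 "$null_size"                        # grow the null bdev
    done
    wait "$PERF_PID"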
00:12:31.597 13:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:32.530 13:11:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:32.788 13:11:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:12:32.788 13:11:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:12:33.046 true 00:12:33.046 13:11:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84234 00:12:33.046 13:11:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:33.304 13:11:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:33.562 13:11:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:12:33.562 13:11:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:12:33.820 true 00:12:33.820 13:11:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84234 00:12:33.820 13:11:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:34.078 13:11:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:34.337 13:11:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:12:34.337 13:11:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:12:34.596 true 00:12:34.596 13:11:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84234 00:12:34.596 13:11:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:35.529 13:11:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:35.787 13:11:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:12:35.787 13:11:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:12:36.081 true 00:12:36.081 13:11:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84234 00:12:36.081 13:11:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:36.374 13:11:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:36.640 13:11:33 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:12:36.640 13:11:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:12:36.899 true 00:12:36.899 13:11:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84234 00:12:36.899 13:11:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:37.156 13:11:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:37.413 13:11:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:12:37.413 13:11:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:12:37.670 true 00:12:37.670 13:11:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84234 00:12:37.670 13:11:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:38.605 13:11:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:38.863 13:11:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:12:38.863 13:11:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:12:39.121 true 00:12:39.121 13:11:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84234 00:12:39.121 13:11:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:39.379 13:11:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:39.638 13:11:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:12:39.638 13:11:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:12:39.896 true 00:12:39.896 13:11:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84234 00:12:39.896 13:11:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:40.154 13:11:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:40.412 13:11:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:12:40.412 13:11:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:12:40.669 true 00:12:40.669 13:11:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84234 00:12:40.669 13:11:37 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:41.604 13:11:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:41.862 13:11:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:12:41.862 13:11:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:12:42.121 true 00:12:42.121 13:11:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84234 00:12:42.121 13:11:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:42.378 13:11:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:42.636 13:11:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:12:42.636 13:11:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:12:42.894 true 00:12:42.894 13:11:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84234 00:12:42.894 13:11:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:43.152 13:11:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:43.410 13:11:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:12:43.410 13:11:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:12:43.668 true 00:12:43.668 13:11:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84234 00:12:43.668 13:11:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:44.600 13:11:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:44.859 13:11:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:12:44.859 13:11:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:12:45.117 true 00:12:45.117 13:11:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84234 00:12:45.117 13:11:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:45.374 13:11:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:45.667 13:11:42 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:12:45.667 13:11:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:12:45.925 true 00:12:45.925 13:11:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84234 00:12:45.925 13:11:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:46.206 13:11:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:46.465 13:11:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:12:46.465 13:11:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:12:46.722 true 00:12:46.722 13:11:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84234 00:12:46.722 13:11:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:47.655 13:11:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:47.913 13:11:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:12:47.914 13:11:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:12:48.479 true 00:12:48.479 13:11:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84234 00:12:48.479 13:11:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:49.853 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:49.853 13:11:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:49.853 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:49.853 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:49.853 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:49.853 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:49.853 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:49.853 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:49.853 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:49.853 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:49.853 13:11:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:12:49.853 13:11:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:12:50.111 true 00:12:50.111 13:11:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84234 00:12:50.111 13:11:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:51.042 13:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:51.300 13:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:12:51.300 13:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:12:51.557 true 00:12:51.557 13:11:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84234 00:12:51.557 13:11:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:51.815 13:11:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:52.072 13:11:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:12:52.072 13:11:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:12:52.329 true 00:12:52.329 13:11:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84234 00:12:52.329 13:11:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:52.586 13:11:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:52.847 13:11:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:12:52.847 13:11:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:12:53.104 true 00:12:53.104 13:11:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84234 00:12:53.104 13:11:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:54.038 13:11:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:54.295 13:11:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:12:54.295 13:11:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:12:54.295 true 00:12:54.553 13:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84234 00:12:54.553 13:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:54.553 13:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:54.811 13:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:12:54.811 13:11:51 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:12:55.069 true 00:12:55.069 13:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84234 00:12:55.069 13:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:56.002 13:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:56.260 13:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:12:56.260 13:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:12:56.570 true 00:12:56.570 13:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84234 00:12:56.570 13:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:56.827 13:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:57.085 13:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:12:57.085 13:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:12:57.342 true 00:12:57.342 13:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84234 00:12:57.342 13:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:57.599 13:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:57.599 13:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:12:57.599 13:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:12:57.857 true 00:12:57.857 13:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84234 00:12:57.857 13:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:59.227 Initializing NVMe Controllers 00:12:59.227 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:59.227 Controller IO queue size 128, less than required. 00:12:59.227 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:59.227 Controller IO queue size 128, less than required. 00:12:59.227 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:59.227 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:59.227 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:12:59.227 Initialization complete. 
Launching workers. 00:12:59.227 ======================================================== 00:12:59.227 Latency(us) 00:12:59.227 Device Information : IOPS MiB/s Average min max 00:12:59.227 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 570.43 0.28 106971.96 3563.14 1173654.82 00:12:59.227 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 9170.97 4.48 13957.13 3375.70 634108.22 00:12:59.227 ======================================================== 00:12:59.227 Total : 9741.40 4.76 19403.86 3375.70 1173654.82 00:12:59.227 00:12:59.227 13:11:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:59.227 13:11:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:12:59.227 13:11:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:12:59.484 true 00:12:59.484 13:11:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84234 00:12:59.484 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (84234) - No such process 00:12:59.484 13:11:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 84234 00:12:59.484 13:11:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:59.741 13:11:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:59.998 13:11:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:12:59.998 13:11:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:12:59.998 13:11:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:12:59.998 13:11:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:59.998 13:11:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:13:00.255 null0 00:13:00.255 13:11:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:00.255 13:11:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:00.255 13:11:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:13:00.512 null1 00:13:00.512 13:11:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:00.512 13:11:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:00.512 13:11:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:13:00.770 null2 00:13:00.770 13:11:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:00.770 13:11:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:00.770 13:11:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_null_create null3 100 4096 00:13:01.026 null3 00:13:01.026 13:11:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:01.027 13:11:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:01.027 13:11:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:13:01.283 null4 00:13:01.283 13:11:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:01.283 13:11:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:01.283 13:11:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:13:01.540 null5 00:13:01.540 13:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:01.540 13:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:01.540 13:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:13:01.797 null6 00:13:01.797 13:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:01.797 13:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:01.797 13:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:13:02.055 null7 00:13:02.055 13:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:02.055 13:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:02.055 13:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:13:02.055 13:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:02.055 13:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:02.055 13:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:02.055 13:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:02.055 13:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:13:02.055 13:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:13:02.055 13:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:02.055 13:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:13:02.055 13:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:02.055 13:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:13:02.055 13:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:02.055 13:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:02.055 13:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
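The repeated target/ns_hotplug_stress.sh@44-@50 records earlier in this stretch are the first phase of the test: while the background I/O process (PID 84234 in this run) is still alive, the script hot-removes namespace 1, re-adds the Delay0 bdev as a namespace, bumps a size counter, and resizes the NULL1 null bdev under load. A minimal bash sketch of that loop, reconstructed only from the commands visible in this trace (rpc_py, nqn and perf_pid are placeholder names, not taken from the script itself):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    null_size=1000                                          # the trace only shows this growing by 1 per pass

    while kill -0 "$perf_pid"; do                           # @44: keep looping while the I/O generator runs
        "$rpc_py" nvmf_subsystem_remove_ns "$nqn" 1         # @45: hot-remove namespace 1
        "$rpc_py" nvmf_subsystem_add_ns "$nqn" Delay0       # @46: hot-add the Delay0 bdev back
        null_size=$((null_size + 1))                        # @49
        "$rpc_py" bdev_null_resize NULL1 "$null_size"       # @50: resize the null bdev under load
    done

Once PID 84234 exits, kill -0 reports "No such process", the loop ends, and the trace falls through to the wait at @53 and the namespace cleanup at @54/@55 seen above.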
00:13:02.055 13:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:02.055 13:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:02.055 13:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:02.055 13:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:02.055 13:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:02.055 13:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:13:02.055 13:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:02.055 13:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:13:02.055 13:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:02.055 13:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:02.055 13:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:02.055 13:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:02.055 13:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:02.055 13:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:13:02.055 13:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:02.055 13:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:13:02.055 13:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:02.055 13:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:02.055 13:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:02.055 13:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
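The @58-@64 records in this stretch set up the second phase: eight 100 MB null bdevs with a 4096-byte block size are created, then one background add_remove worker is started per bdev and its PID collected so the script can wait on all of them (the @66 wait listing eight PIDs appears a little further down). A sketch of that launcher, again reconstructed from the trace under the same placeholder names as above, with add_remove as sketched just below:

    nthreads=8                                          # @58
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        "$rpc_py" bdev_null_create "null$i" 100 4096    # @60: bdev name, size in MB, block size
    done
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &                # @63: namespace IDs 1..8 paired with null0..null7
        pids+=($!)                                      # @64: remember each worker's PID
    done
    wait "${pids[@]}"                                   # @66: e.g. "wait 85270 85271 85273 ..." in this log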
00:13:02.055 13:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:02.055 13:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:13:02.055 13:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:02.055 13:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:02.055 13:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:13:02.055 13:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:02.055 13:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:02.055 13:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:02.055 13:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:02.055 13:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:02.055 13:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:02.055 13:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:13:02.055 13:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:13:02.055 13:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:02.055 13:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:02.055 13:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:13:02.055 13:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:02.055 13:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:13:02.055 13:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:02.055 13:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:02.055 13:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:02.055 13:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:02.055 13:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:02.055 13:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:02.055 13:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
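From here on the interleaved @14-@18 records are those eight workers running concurrently, which is why add_ns and remove_ns calls for different namespace IDs appear shuffled together. Judging from the arguments in the trace, each worker performs ten add/remove cycles of its own namespace against its own null bdev; a bash sketch of that helper (a reconstruction from the trace, not the script's literal text):

    add_remove() {
        local nsid=$1 bdev=$2                                            # @14: e.g. nsid=1 bdev=null0
        for ((i = 0; i < 10; i++)); do                                   # @16: ten hot-plug cycles per worker
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"    # @17
            "$rpc_py" nvmf_subsystem_remove_ns "$nqn" "$nsid"            # @18
        done
    }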
00:13:02.055 13:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:02.055 13:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:02.055 13:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 85270 85271 85273 85275 85276 85279 85281 85284 00:13:02.055 13:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:13:02.056 13:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:13:02.056 13:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:02.056 13:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:02.056 13:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:02.313 13:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:02.313 13:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:02.313 13:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:02.313 13:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:02.313 13:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:02.313 13:11:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:02.572 13:11:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:02.572 13:11:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:02.572 13:11:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:02.572 13:11:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:02.572 13:11:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:02.572 13:11:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:02.572 13:11:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:02.572 13:11:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:02.572 13:11:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:02.572 13:11:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:13:02.572 13:11:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:02.831 13:11:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:02.831 13:11:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:02.831 13:11:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:02.831 13:11:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:02.831 13:11:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:02.831 13:11:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:02.831 13:11:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:02.831 13:11:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:02.831 13:11:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:02.831 13:11:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:02.831 13:11:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:02.831 13:11:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:02.831 13:11:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:02.831 13:11:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:02.831 13:11:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:02.831 13:11:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:02.831 13:11:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:02.831 13:11:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:03.089 13:11:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:03.089 13:11:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:03.089 13:11:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:03.089 13:11:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 7 00:13:03.089 13:11:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:03.089 13:11:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:03.090 13:11:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:03.090 13:11:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:03.090 13:11:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:03.090 13:11:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:03.090 13:11:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:03.090 13:11:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:03.090 13:11:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:03.090 13:11:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:03.348 13:11:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:03.348 13:11:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:03.348 13:11:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:03.348 13:11:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:03.348 13:11:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:03.348 13:11:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:03.348 13:11:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:03.348 13:11:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:03.348 13:11:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:03.348 13:11:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:03.348 13:11:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:03.348 13:11:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:03.348 13:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:03.348 13:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:03.348 13:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:03.348 13:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:03.609 13:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:03.609 13:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:03.609 13:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:03.609 13:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:03.609 13:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:03.609 13:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:03.609 13:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:03.609 13:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:03.609 13:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:03.609 13:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:03.867 13:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:03.867 13:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:03.867 13:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:03.867 13:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:03.867 13:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:03.867 13:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:03.867 13:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:03.867 13:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:03.867 13:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:03.867 13:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:03.867 13:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:03.867 13:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:03.867 13:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:03.867 13:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:03.867 13:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:03.867 13:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:03.867 13:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:03.867 13:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:03.868 13:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:03.868 13:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:03.868 13:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:04.125 13:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:04.125 13:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:04.125 13:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:04.125 13:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:04.125 13:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:04.125 13:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:04.125 13:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:04.383 13:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:04.383 13:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:04.383 13:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:04.383 13:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:04.383 13:12:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:04.383 13:12:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:04.383 13:12:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:04.383 13:12:01 nvmf_tcp.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:04.383 13:12:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:04.383 13:12:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:04.383 13:12:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:04.383 13:12:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:04.383 13:12:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:04.383 13:12:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:04.383 13:12:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:04.383 13:12:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:04.383 13:12:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:04.383 13:12:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:04.383 13:12:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:04.641 13:12:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:04.641 13:12:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:04.641 13:12:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:04.641 13:12:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:04.641 13:12:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:04.641 13:12:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:04.641 13:12:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:04.641 13:12:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:04.641 13:12:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:04.641 13:12:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:04.641 13:12:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:04.899 13:12:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:04.899 13:12:01 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:04.899 13:12:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:04.899 13:12:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:04.899 13:12:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:04.899 13:12:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:04.899 13:12:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:04.899 13:12:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:04.899 13:12:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:04.899 13:12:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:04.899 13:12:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:04.899 13:12:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:04.899 13:12:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:04.899 13:12:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:04.899 13:12:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:04.899 13:12:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:04.899 13:12:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:04.899 13:12:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:05.156 13:12:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:05.157 13:12:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:05.157 13:12:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:05.157 13:12:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:05.157 13:12:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:05.157 13:12:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:05.157 13:12:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:05.157 13:12:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:05.157 13:12:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:05.157 13:12:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:05.157 13:12:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:05.157 13:12:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:05.414 13:12:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:05.415 13:12:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:05.415 13:12:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:05.415 13:12:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:05.415 13:12:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:05.415 13:12:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:05.415 13:12:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:05.415 13:12:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:05.415 13:12:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:05.415 13:12:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:05.415 13:12:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:05.673 13:12:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:05.673 13:12:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:05.673 13:12:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:05.673 13:12:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:05.673 13:12:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:05.673 13:12:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:05.673 13:12:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:05.673 13:12:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:05.673 13:12:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:05.673 13:12:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:05.673 13:12:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:05.673 13:12:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:05.673 13:12:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:05.673 13:12:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:05.673 13:12:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:05.673 13:12:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:05.673 13:12:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:05.673 13:12:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:05.931 13:12:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:05.931 13:12:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:05.931 13:12:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:05.931 13:12:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:05.931 13:12:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:05.931 13:12:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:05.931 13:12:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:05.931 13:12:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:06.188 13:12:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:06.188 13:12:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:06.188 13:12:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:06.188 13:12:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:06.188 13:12:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:06.189 13:12:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:06.189 13:12:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:06.189 13:12:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:06.189 13:12:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:06.189 13:12:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:06.189 13:12:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:06.189 13:12:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:06.189 13:12:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:06.189 13:12:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:06.189 13:12:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:06.189 13:12:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:06.189 13:12:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:06.189 13:12:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:06.189 13:12:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:06.189 13:12:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:06.189 13:12:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:06.447 13:12:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:06.447 13:12:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:06.447 13:12:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:06.447 13:12:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:06.447 13:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:06.447 13:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:06.447 13:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:06.447 13:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:06.706 13:12:03 nvmf_tcp.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:06.706 13:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:06.706 13:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:06.706 13:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:06.706 13:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:06.706 13:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:06.706 13:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:06.706 13:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:06.706 13:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:06.706 13:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:06.706 13:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:06.706 13:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:06.706 13:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:06.706 13:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:06.706 13:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:06.964 13:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:06.964 13:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:06.964 13:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:06.964 13:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:06.964 13:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:06.964 13:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:06.964 13:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:06.964 13:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:06.964 13:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:06.964 13:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 
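[Editor's note] The interleaved entries above come from target/ns_hotplug_stress.sh: line @16 is the loop counter check, @17 re-adds a namespace, and @18 removes it again, with namespace IDs 1-8 mapped onto null bdevs null0-null7 of nqn.2016-06.io.spdk:cnode1. A minimal sketch of that hot-plug loop, assuming one backgrounded add/remove worker per namespace (an assumption made to account for the shuffled ordering in the trace; the real script may structure this differently):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    add_remove() {                        # hypothetical worker, one per namespace id
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do    # matches the (( i < 10 )) checks at @16
            "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"   # @17
            "$rpc" nvmf_subsystem_remove_ns "$nqn" "$nsid"           # @18
        done
    }

    for n in $(seq 1 8); do
        add_remove "$n" "null$((n - 1))" &   # nsid 1..8 backed by null0..null7
    done
    wait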
00:13:06.964 13:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:06.964 13:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:06.964 13:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:06.964 13:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:06.964 13:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:07.222 13:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:07.222 13:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:07.222 13:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:07.222 13:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:07.222 13:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:07.222 13:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:07.222 13:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:07.222 13:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:07.222 13:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:07.222 13:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:07.222 13:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:07.222 13:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:07.222 13:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:07.222 13:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:07.480 13:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:07.480 13:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:07.480 13:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:07.480 13:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:07.480 13:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:07.480 13:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:07.480 13:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:07.480 13:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:07.480 13:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:07.480 13:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:07.480 13:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:07.480 13:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:07.480 13:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:07.480 13:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:07.737 13:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:07.737 13:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:07.737 13:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:07.737 13:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:07.737 13:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:07.737 13:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:07.737 13:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:07.737 13:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:07.737 13:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:07.737 13:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:07.737 13:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:07.737 13:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:07.996 13:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:07.996 13:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:07.996 13:12:04 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:07.996 13:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:07.996 13:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:07.996 13:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:07.996 13:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:07.996 13:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:07.996 13:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:07.996 13:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:07.996 13:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:08.254 13:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:08.254 13:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:08.254 13:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:08.254 13:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:13:08.254 13:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:08.254 13:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:13:08.254 13:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:08.254 13:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:13:08.254 13:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:08.254 13:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:08.254 rmmod nvme_tcp 00:13:08.254 rmmod nvme_fabrics 00:13:08.254 rmmod nvme_keyring 00:13:08.254 13:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:08.254 13:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:13:08.254 13:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:13:08.254 13:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 84103 ']' 00:13:08.254 13:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 84103 00:13:08.254 13:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@946 -- # '[' -z 84103 ']' 00:13:08.254 13:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # kill -0 84103 00:13:08.254 13:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # uname 00:13:08.254 13:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:08.254 13:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 84103 00:13:08.254 killing process with pid 84103 00:13:08.254 13:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:13:08.254 13:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:13:08.254 13:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 84103' 00:13:08.254 13:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@965 -- # kill 
84103 00:13:08.254 13:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # wait 84103 00:13:08.512 13:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:08.512 13:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:08.512 13:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:08.512 13:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:08.512 13:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:08.512 13:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:08.512 13:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:08.512 13:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:08.512 13:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:08.512 00:13:08.512 real 0m44.375s 00:13:08.512 user 3m34.909s 00:13:08.512 sys 0m13.398s 00:13:08.512 13:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:08.512 13:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:08.512 ************************************ 00:13:08.512 END TEST nvmf_ns_hotplug_stress 00:13:08.513 ************************************ 00:13:08.513 13:12:05 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:08.513 13:12:05 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:08.513 13:12:05 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:08.513 13:12:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:08.513 ************************************ 00:13:08.513 START TEST nvmf_connect_stress 00:13:08.513 ************************************ 00:13:08.513 13:12:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:08.771 * Looking for test storage... 
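[Editor's note] At this point the hot-plug suite has wound down: nvmftestfini unloads nvme-tcp (pulling out nvme_fabrics and nvme_keyring, per the rmmod lines), killprocess stops the nvmf_tgt reactor (pid 84103), the network namespace is removed, the timing summary is printed, and run_test launches the next suite, connect_stress.sh --transport=tcp. A rough sketch of that teardown, simplified from what the trace shows (the real nvmftestfini in nvmf/common.sh covers more transports and error handling):

    nvmftestfini_sketch() {           # hypothetical condensation, not the real helper
        sync
        modprobe -v -r nvme-tcp       # also drags out nvme_fabrics / nvme_keyring
        modprobe -v -r nvme-fabrics
        kill "$nvmfpid" && wait "$nvmfpid"                     # assumes $nvmfpid holds the nvmf_tgt pid
        ip netns delete nvmf_tgt_ns_spdk 2> /dev/null || true  # remove_spdk_ns (assumed form)
        ip -4 addr flush nvmf_init_if
    }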
00:13:08.771 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:08.771 13:12:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:08.771 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:13:08.771 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:08.771 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:08.771 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:08.771 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:08.771 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:08.771 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:08.771 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:08.771 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:08.771 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:08.771 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:08.771 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:13:08.771 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=c8b8b44b-387e-43b9-a950-dc0d98528a02 00:13:08.771 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:08.771 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:08.771 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:08.771 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:08.771 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:08.771 13:12:05 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:08.771 13:12:05 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:08.771 13:12:05 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:08.771 13:12:05 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.771 13:12:05 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.771 13:12:05 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.771 13:12:05 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:08.771 13:12:05 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.771 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:13:08.771 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:08.771 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:08.771 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:08.771 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:08.771 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:08.771 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:08.771 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:08.771 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:08.771 13:12:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:08.771 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:08.771 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:08.771 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:08.771 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:08.772 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:08.772 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:08.772 13:12:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:13:08.772 13:12:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:08.772 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:08.772 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:08.772 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:08.772 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:08.772 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:08.772 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:08.772 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:08.772 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:08.772 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:08.772 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:08.772 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:08.772 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:08.772 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:08.772 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:08.772 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:08.772 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:08.772 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:08.772 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:08.772 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:08.772 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:08.772 Cannot find device "nvmf_tgt_br" 00:13:08.772 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@155 -- # true 00:13:08.772 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:08.772 Cannot find device "nvmf_tgt_br2" 00:13:08.772 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@156 -- # true 00:13:08.772 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:08.772 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:08.772 Cannot find device "nvmf_tgt_br" 00:13:08.772 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@158 -- # true 00:13:08.772 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:08.772 Cannot find device "nvmf_tgt_br2" 00:13:08.772 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@159 -- # true 00:13:08.772 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:08.772 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:08.772 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link delete nvmf_tgt_if 00:13:08.772 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:08.772 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@162 -- # true 00:13:08.772 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:08.772 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:08.772 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@163 -- # true 00:13:08.772 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:08.772 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:08.772 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:09.031 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:09.031 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:09.031 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:09.031 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:09.031 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:09.031 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:09.031 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:09.031 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:09.031 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:09.031 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:09.031 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:09.031 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:09.031 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:09.031 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:09.031 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:09.031 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:09.031 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:09.031 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:09.031 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:09.031 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:09.031 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:09.031 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:09.031 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.117 ms 00:13:09.031 00:13:09.031 --- 10.0.0.2 ping statistics --- 00:13:09.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:09.031 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:13:09.031 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:09.031 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:09.031 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:13:09.031 00:13:09.031 --- 10.0.0.3 ping statistics --- 00:13:09.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:09.031 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:13:09.031 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:09.031 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:09.031 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:13:09.031 00:13:09.031 --- 10.0.0.1 ping statistics --- 00:13:09.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:09.031 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:13:09.031 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:09.031 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@433 -- # return 0 00:13:09.031 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:09.031 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:09.031 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:09.031 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:09.031 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:09.031 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:09.031 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:09.031 13:12:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:09.031 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:09.031 13:12:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:09.031 13:12:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:09.031 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=86584 00:13:09.031 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:09.031 13:12:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 86584 00:13:09.031 13:12:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@827 -- # '[' -z 86584 ']' 00:13:09.031 13:12:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:09.031 13:12:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:09.031 13:12:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:09.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
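[Editor's note] Before the target app comes up, nvmf_veth_init builds the virtual topology that the later 10.0.0.x addresses refer to: the initiator side stays in the root namespace on 10.0.0.1, the target interfaces live inside the nvmf_tgt_ns_spdk namespace on 10.0.0.2/10.0.0.3, a bridge joins the veth peers, and nvmf_tgt (pid 86584 in this run) is then started inside that namespace. Condensed from the commands traced above:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link set nvmf_init_br master nvmf_br     # bridge the host-side veth peers
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2    # initiator -> target-namespace reachability check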
00:13:09.031 13:12:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:09.031 13:12:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:09.031 [2024-07-15 13:12:05.746553] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:13:09.031 [2024-07-15 13:12:05.746633] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:09.303 [2024-07-15 13:12:05.882777] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:09.303 [2024-07-15 13:12:05.980877] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:09.303 [2024-07-15 13:12:05.981171] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:09.303 [2024-07-15 13:12:05.981291] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:09.303 [2024-07-15 13:12:05.981387] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:09.303 [2024-07-15 13:12:05.981483] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:09.303 [2024-07-15 13:12:05.981662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:09.303 [2024-07-15 13:12:05.982030] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:09.303 [2024-07-15 13:12:05.982042] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:10.295 13:12:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:10.295 13:12:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@860 -- # return 0 00:13:10.295 13:12:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:10.295 13:12:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:10.295 13:12:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:10.295 13:12:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:10.295 13:12:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:10.295 13:12:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.295 13:12:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:10.295 [2024-07-15 13:12:06.744325] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:10.295 13:12:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.295 13:12:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:10.295 13:12:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.295 13:12:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:10.295 13:12:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.295 13:12:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:10.295 
13:12:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.295 13:12:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:10.295 [2024-07-15 13:12:06.762359] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:10.295 13:12:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.295 13:12:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:10.295 13:12:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.295 13:12:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:10.295 NULL1 00:13:10.295 13:12:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.295 13:12:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=86637 00:13:10.295 13:12:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:13:10.295 13:12:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:13:10.295 13:12:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:10.295 13:12:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:10.295 13:12:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:10.295 13:12:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:10.295 13:12:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:10.295 13:12:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:10.295 13:12:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:10.295 13:12:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:10.295 13:12:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:10.295 13:12:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:10.295 13:12:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:10.295 13:12:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:10.295 13:12:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:10.295 13:12:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:10.295 13:12:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:10.295 13:12:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:10.295 13:12:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:10.295 13:12:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:10.295 13:12:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:10.295 13:12:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:10.295 13:12:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:10.295 13:12:06 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:10.295 13:12:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:10.295 13:12:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:10.295 13:12:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:10.295 13:12:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:10.295 13:12:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:10.295 13:12:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:10.295 13:12:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:10.295 13:12:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:10.295 13:12:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:10.295 13:12:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:10.295 13:12:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:10.295 13:12:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:10.295 13:12:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:10.295 13:12:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:10.295 13:12:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:10.295 13:12:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:10.295 13:12:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:10.295 13:12:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:10.295 13:12:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:10.295 13:12:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:10.295 13:12:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86637 00:13:10.295 13:12:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:10.295 13:12:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.295 13:12:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:10.553 13:12:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.553 13:12:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86637 00:13:10.553 13:12:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:10.553 13:12:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.553 13:12:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:10.811 13:12:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.811 13:12:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86637 00:13:10.811 13:12:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:10.811 13:12:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.811 13:12:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:11.375 13:12:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
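[Editor's note] The subsystem the stressor connects to is assembled with a handful of RPCs (connect_stress.sh @15-@18 in the trace), then the connect_stress binary is launched in the background and the @27/@28 loop builds twenty RPC invocations into rpc.txt that are replayed while it runs. A condensed replay of that setup; the monitoring loop body is left as a comment because the trace only shows bare `cat` and `rpc_cmd` calls, so the exact batch contents and plumbing are assumptions:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt

    "$rpc" nvmf_create_transport -t tcp -o -u 8192                        # @15
    "$rpc" nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001 -m 10    # @16: up to 10 namespaces
    "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420  # @17
    "$rpc" bdev_null_create NULL1 1000 512                                # @18: name, size (MB), block size

    /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress \
        -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -t 10 &                                                           # @20-@21: 10-second background run
    PERF_PID=$!                                                           # 86637 in this run

    while kill -0 "$PERF_PID" 2> /dev/null; do   # @34: poll the stressor
        # @35: the script issues the batched RPCs from $rpcs here (mechanism not shown in the trace)
        :
    done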
00:13:11.375 13:12:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86637 00:13:11.375 13:12:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:11.375 13:12:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:11.375 13:12:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:11.632 13:12:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:11.632 13:12:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86637 00:13:11.632 13:12:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:11.632 13:12:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:11.632 13:12:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:11.890 13:12:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:11.890 13:12:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86637 00:13:11.890 13:12:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:11.890 13:12:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:11.890 13:12:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:12.147 13:12:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:12.147 13:12:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86637 00:13:12.147 13:12:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:12.147 13:12:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:12.147 13:12:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:12.403 13:12:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:12.403 13:12:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86637 00:13:12.403 13:12:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:12.403 13:12:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:12.403 13:12:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:12.982 13:12:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:12.982 13:12:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86637 00:13:12.982 13:12:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:12.982 13:12:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:12.982 13:12:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:13.245 13:12:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.245 13:12:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86637 00:13:13.245 13:12:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:13.245 13:12:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.245 13:12:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:13.502 13:12:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.502 13:12:10 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 86637 00:13:13.502 13:12:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:13.502 13:12:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.502 13:12:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:13.759 13:12:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.759 13:12:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86637 00:13:13.759 13:12:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:13.759 13:12:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.759 13:12:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:14.017 13:12:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.017 13:12:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86637 00:13:14.017 13:12:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:14.017 13:12:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.017 13:12:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:14.349 13:12:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.349 13:12:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86637 00:13:14.349 13:12:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:14.349 13:12:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.349 13:12:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:14.617 13:12:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.618 13:12:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86637 00:13:14.618 13:12:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:14.618 13:12:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.618 13:12:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:15.183 13:12:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.183 13:12:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86637 00:13:15.183 13:12:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:15.183 13:12:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.183 13:12:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:15.440 13:12:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.440 13:12:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86637 00:13:15.440 13:12:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:15.440 13:12:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.440 13:12:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:15.698 13:12:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.698 13:12:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86637 00:13:15.698 13:12:12 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:15.698 13:12:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.698 13:12:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:15.955 13:12:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.955 13:12:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86637 00:13:15.955 13:12:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:15.955 13:12:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.955 13:12:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:16.521 13:12:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.521 13:12:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86637 00:13:16.521 13:12:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:16.521 13:12:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.521 13:12:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:16.778 13:12:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.778 13:12:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86637 00:13:16.778 13:12:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:16.778 13:12:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.778 13:12:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:17.036 13:12:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.036 13:12:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86637 00:13:17.036 13:12:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:17.036 13:12:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.036 13:12:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:17.293 13:12:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.293 13:12:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86637 00:13:17.293 13:12:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:17.293 13:12:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.293 13:12:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:17.550 13:12:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.550 13:12:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86637 00:13:17.550 13:12:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:17.550 13:12:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.550 13:12:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:18.115 13:12:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:18.115 13:12:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86637 00:13:18.115 13:12:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 
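[Editor's note] The long run of @34/@35 pairs above is that monitor loop iterating: each `kill -0 86637` probes whether the stress process is still alive, and each `rpc_cmd` replays the queued batch against the target. `kill -0` delivers no signal at all; it only reports, through its exit status, whether the PID can currently be signalled, e.g.:

    if kill -0 "$PERF_PID" 2> /dev/null; then
        echo "connect_stress (pid $PERF_PID) is still running"
    fi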
00:13:18.115 13:12:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:18.115 13:12:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:18.373 13:12:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:18.373 13:12:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86637 00:13:18.373 13:12:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:18.373 13:12:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:18.373 13:12:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:18.631 13:12:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:18.631 13:12:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86637 00:13:18.631 13:12:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:18.631 13:12:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:18.631 13:12:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:18.889 13:12:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:18.889 13:12:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86637 00:13:18.889 13:12:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:18.889 13:12:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:18.889 13:12:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:19.146 13:12:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.146 13:12:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86637 00:13:19.146 13:12:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:19.146 13:12:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.146 13:12:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:19.728 13:12:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.728 13:12:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86637 00:13:19.728 13:12:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:19.728 13:12:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.728 13:12:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:19.985 13:12:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.985 13:12:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86637 00:13:19.985 13:12:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:19.985 13:12:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.985 13:12:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:20.296 13:12:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.296 13:12:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86637 00:13:20.296 13:12:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:20.296 13:12:16 nvmf_tcp.nvmf_connect_stress -- 
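Editor's note: the block above is one pass of the connect_stress poll loop, repeated for the duration of the stress run. A minimal paraphrase of its shape in bash, assuming $stress_pid holds the recorded stressor pid (86637 in this run), $rpc_txt is the rpc.txt the script later deletes at line 39, and rpc_cmd is the suite helper seen in the trace -- the exact RPC payload is not visible here:

    while kill -0 "$stress_pid" 2> /dev/null; do   # connect_stress.sh@34: is the stressor still alive?
        rpc_cmd < "$rpc_txt"                       # connect_stress.sh@35: keep the target's RPC path busy (payload assumed)
    done
    wait "$stress_pid"                             # connect_stress.sh@38
    rm -f "$rpc_txt"                               # connect_stress.sh@39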
common/autotest_common.sh@559 -- # xtrace_disable 00:13:20.296 13:12:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:20.296 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:20.568 13:12:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.568 13:12:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86637 00:13:20.568 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (86637) - No such process 00:13:20.568 13:12:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 86637 00:13:20.568 13:12:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:13:20.568 13:12:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:20.568 13:12:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:20.568 13:12:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:20.568 13:12:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:13:20.568 13:12:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:20.568 13:12:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:13:20.568 13:12:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:20.568 13:12:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:20.568 rmmod nvme_tcp 00:13:20.568 rmmod nvme_fabrics 00:13:20.568 rmmod nvme_keyring 00:13:20.568 13:12:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:20.568 13:12:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:13:20.568 13:12:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:13:20.568 13:12:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 86584 ']' 00:13:20.568 13:12:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 86584 00:13:20.568 13:12:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@946 -- # '[' -z 86584 ']' 00:13:20.568 13:12:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@950 -- # kill -0 86584 00:13:20.568 13:12:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # uname 00:13:20.568 13:12:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:20.568 13:12:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 86584 00:13:20.568 killing process with pid 86584 00:13:20.568 13:12:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:13:20.568 13:12:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:13:20.568 13:12:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 86584' 00:13:20.568 13:12:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@965 -- # kill 86584 00:13:20.568 13:12:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@970 -- # wait 86584 00:13:20.826 13:12:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:20.826 13:12:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:20.826 13:12:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 
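Editor's note: nvmftestfini above unwinds the TCP test environment in a fixed order. A minimal sketch of that order, assuming the target pid recorded at start-up is in $nvmfpid (86584 in this run); killprocess in the suite additionally checks, via the ps comm lookup shown in the trace, that the pid still names a reactor process before signalling it:

    sync
    modprobe -v -r nvme-tcp        # produces the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines above
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid"
    wait "$nvmfpid" 2> /dev/null || true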
00:13:20.826 13:12:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:20.826 13:12:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:20.826 13:12:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:20.826 13:12:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:20.826 13:12:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:20.826 13:12:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:20.826 00:13:20.826 real 0m12.334s 00:13:20.826 user 0m41.117s 00:13:20.826 sys 0m3.116s 00:13:20.826 13:12:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:20.826 13:12:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:20.826 ************************************ 00:13:20.826 END TEST nvmf_connect_stress 00:13:20.826 ************************************ 00:13:21.085 13:12:17 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:21.085 13:12:17 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:21.085 13:12:17 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:21.085 13:12:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:21.085 ************************************ 00:13:21.085 START TEST nvmf_fused_ordering 00:13:21.085 ************************************ 00:13:21.085 13:12:17 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:21.085 * Looking for test storage... 
00:13:21.085 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:21.085 13:12:17 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:21.085 13:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:13:21.085 13:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:21.085 13:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:21.085 13:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:21.085 13:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:21.085 13:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:21.085 13:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:21.085 13:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:21.085 13:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:21.085 13:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:21.085 13:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:21.085 13:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:13:21.085 13:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=c8b8b44b-387e-43b9-a950-dc0d98528a02 00:13:21.085 13:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:21.085 13:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:21.085 13:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:21.085 13:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:21.085 13:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:21.085 13:12:17 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:21.085 13:12:17 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:21.085 13:12:17 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:21.085 13:12:17 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.085 13:12:17 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.085 13:12:17 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.086 13:12:17 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:13:21.086 13:12:17 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.086 13:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:13:21.086 13:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:21.086 13:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:21.086 13:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:21.086 13:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:21.086 13:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:21.086 13:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:21.086 13:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:21.086 13:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:21.086 13:12:17 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:21.086 13:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:21.086 13:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:21.086 13:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:21.086 13:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:21.086 13:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:21.086 13:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:21.086 13:12:17 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:13:21.086 13:12:17 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:21.086 13:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:21.086 13:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:21.086 13:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:21.086 13:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:21.086 13:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:21.086 13:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:21.086 13:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:21.086 13:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:21.086 13:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:21.086 13:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:21.086 13:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:21.086 13:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:21.086 13:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:21.086 13:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:21.086 13:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:21.086 13:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:21.086 13:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:21.086 13:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:21.086 13:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:21.086 13:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:21.086 Cannot find device "nvmf_tgt_br" 00:13:21.086 13:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@155 -- # true 00:13:21.086 13:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:21.086 Cannot find device "nvmf_tgt_br2" 00:13:21.086 13:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@156 -- # true 00:13:21.086 13:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:21.086 13:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:21.086 Cannot find device "nvmf_tgt_br" 00:13:21.086 13:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@158 -- # true 00:13:21.086 13:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:21.086 Cannot find device "nvmf_tgt_br2" 00:13:21.086 13:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@159 -- # true 00:13:21.086 13:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:21.086 13:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:21.345 13:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link delete nvmf_tgt_if 00:13:21.345 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:21.345 13:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@162 -- # true 00:13:21.345 13:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:21.345 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:21.345 13:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@163 -- # true 00:13:21.345 13:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:21.345 13:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:21.345 13:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:21.345 13:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:21.345 13:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:21.345 13:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:21.345 13:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:21.345 13:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:21.345 13:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:21.345 13:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:21.345 13:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:21.345 13:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:21.345 13:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:21.345 13:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:21.345 13:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:21.345 13:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:21.345 13:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:21.345 13:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:21.345 13:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:21.345 13:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:21.345 13:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:21.345 13:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:21.345 13:12:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:21.345 13:12:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:21.345 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
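Editor's note: the ip/iptables commands above build the veth-and-bridge topology the nvmf TCP tests run on: the initiator side stays in the root namespace on 10.0.0.1, while the target interfaces live in the nvmf_tgt_ns_spdk namespace on 10.0.0.2 and 10.0.0.3. A condensed sketch using the names from the trace (the loops are a compaction, not the literal command order):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2    # root namespace -> target interface; the reply statistics continue below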
00:13:21.345 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:13:21.345 00:13:21.345 --- 10.0.0.2 ping statistics --- 00:13:21.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:21.345 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:13:21.345 13:12:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:21.345 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:21.345 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:13:21.345 00:13:21.345 --- 10.0.0.3 ping statistics --- 00:13:21.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:21.345 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:13:21.345 13:12:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:21.345 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:21.345 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:13:21.345 00:13:21.345 --- 10.0.0.1 ping statistics --- 00:13:21.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:21.345 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:13:21.345 13:12:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:21.345 13:12:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@433 -- # return 0 00:13:21.345 13:12:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:21.345 13:12:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:21.345 13:12:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:21.345 13:12:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:21.345 13:12:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:21.345 13:12:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:21.345 13:12:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:21.345 13:12:18 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:21.345 13:12:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:21.345 13:12:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:21.345 13:12:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:21.345 13:12:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=86960 00:13:21.345 13:12:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:21.345 13:12:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 86960 00:13:21.345 13:12:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@827 -- # '[' -z 86960 ']' 00:13:21.345 13:12:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:21.345 13:12:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:21.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:21.345 13:12:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
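Editor's note: with the namespace wired up and nvme-tcp loaded, nvmfappstart launches the target inside the namespace. Sketch taken from the trace; -m 0x2 pins the single reactor to core 1, which matches the "Reactor started on core 1" notice that follows:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    # the suite then blocks in waitforlisten until /var/tmp/spdk.sock accepts RPC connections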
00:13:21.345 13:12:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:21.345 13:12:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:21.614 [2024-07-15 13:12:18.099434] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:13:21.614 [2024-07-15 13:12:18.099546] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:21.614 [2024-07-15 13:12:18.239558] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:21.614 [2024-07-15 13:12:18.337085] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:21.614 [2024-07-15 13:12:18.337148] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:21.614 [2024-07-15 13:12:18.337163] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:21.614 [2024-07-15 13:12:18.337174] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:21.614 [2024-07-15 13:12:18.337183] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:21.614 [2024-07-15 13:12:18.337231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:22.554 13:12:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:22.554 13:12:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # return 0 00:13:22.554 13:12:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:22.554 13:12:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:22.554 13:12:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:22.554 13:12:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:22.554 13:12:19 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:22.554 13:12:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:22.554 13:12:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:22.554 [2024-07-15 13:12:19.119851] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:22.554 13:12:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:22.554 13:12:19 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:22.554 13:12:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:22.554 13:12:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:22.554 13:12:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:22.554 13:12:19 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:22.554 13:12:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:22.554 13:12:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:22.554 [2024-07-15 
13:12:19.135946] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:22.554 13:12:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:22.554 13:12:19 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:22.554 13:12:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:22.554 13:12:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:22.554 NULL1 00:13:22.554 13:12:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:22.554 13:12:19 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:22.554 13:12:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:22.554 13:12:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:22.554 13:12:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:22.554 13:12:19 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:22.554 13:12:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:22.554 13:12:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:22.554 13:12:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:22.554 13:12:19 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:22.554 [2024-07-15 13:12:19.183733] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
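Editor's note: the rpc_cmd calls above provision the target that the fused-ordering exerciser then drives. Condensed into plain rpc.py invocations on the assumption that rpc_cmd wraps scripts/rpc.py against /var/tmp/spdk.sock (flags copied verbatim from the trace):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py bdev_null_create NULL1 1000 512    # backing namespace; the exerciser reports it as "size: 1GB" below
    rpc.py bdev_wait_for_examine
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
    # then the initiator-side exerciser connects over TCP and produces the fused_ordering(...) lines that follow:
    /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'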
00:13:22.554 [2024-07-15 13:12:19.183775] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87010 ] 00:13:23.120 Attached to nqn.2016-06.io.spdk:cnode1 00:13:23.120 Namespace ID: 1 size: 1GB 00:13:23.120 fused_ordering(0) 00:13:23.120 fused_ordering(1) 00:13:23.120 fused_ordering(2) 00:13:23.120 fused_ordering(3) 00:13:23.120 fused_ordering(4) 00:13:23.120 fused_ordering(5) 00:13:23.120 fused_ordering(6) 00:13:23.120 fused_ordering(7) 00:13:23.120 fused_ordering(8) 00:13:23.120 fused_ordering(9) 00:13:23.120 fused_ordering(10) 00:13:23.120 fused_ordering(11) 00:13:23.120 fused_ordering(12) 00:13:23.120 fused_ordering(13) 00:13:23.120 fused_ordering(14) 00:13:23.120 fused_ordering(15) 00:13:23.120 fused_ordering(16) 00:13:23.120 fused_ordering(17) 00:13:23.120 fused_ordering(18) 00:13:23.120 fused_ordering(19) 00:13:23.120 fused_ordering(20) 00:13:23.120 fused_ordering(21) 00:13:23.120 fused_ordering(22) 00:13:23.120 fused_ordering(23) 00:13:23.120 fused_ordering(24) 00:13:23.120 fused_ordering(25) 00:13:23.120 fused_ordering(26) 00:13:23.120 fused_ordering(27) 00:13:23.120 fused_ordering(28) 00:13:23.120 fused_ordering(29) 00:13:23.120 fused_ordering(30) 00:13:23.120 fused_ordering(31) 00:13:23.120 fused_ordering(32) 00:13:23.120 fused_ordering(33) 00:13:23.120 fused_ordering(34) 00:13:23.120 fused_ordering(35) 00:13:23.120 fused_ordering(36) 00:13:23.120 fused_ordering(37) 00:13:23.120 fused_ordering(38) 00:13:23.120 fused_ordering(39) 00:13:23.120 fused_ordering(40) 00:13:23.120 fused_ordering(41) 00:13:23.120 fused_ordering(42) 00:13:23.120 fused_ordering(43) 00:13:23.120 fused_ordering(44) 00:13:23.120 fused_ordering(45) 00:13:23.120 fused_ordering(46) 00:13:23.120 fused_ordering(47) 00:13:23.120 fused_ordering(48) 00:13:23.120 fused_ordering(49) 00:13:23.120 fused_ordering(50) 00:13:23.120 fused_ordering(51) 00:13:23.120 fused_ordering(52) 00:13:23.120 fused_ordering(53) 00:13:23.120 fused_ordering(54) 00:13:23.120 fused_ordering(55) 00:13:23.120 fused_ordering(56) 00:13:23.120 fused_ordering(57) 00:13:23.120 fused_ordering(58) 00:13:23.120 fused_ordering(59) 00:13:23.120 fused_ordering(60) 00:13:23.120 fused_ordering(61) 00:13:23.120 fused_ordering(62) 00:13:23.120 fused_ordering(63) 00:13:23.120 fused_ordering(64) 00:13:23.120 fused_ordering(65) 00:13:23.120 fused_ordering(66) 00:13:23.120 fused_ordering(67) 00:13:23.120 fused_ordering(68) 00:13:23.120 fused_ordering(69) 00:13:23.120 fused_ordering(70) 00:13:23.120 fused_ordering(71) 00:13:23.120 fused_ordering(72) 00:13:23.120 fused_ordering(73) 00:13:23.120 fused_ordering(74) 00:13:23.120 fused_ordering(75) 00:13:23.120 fused_ordering(76) 00:13:23.120 fused_ordering(77) 00:13:23.120 fused_ordering(78) 00:13:23.120 fused_ordering(79) 00:13:23.120 fused_ordering(80) 00:13:23.120 fused_ordering(81) 00:13:23.120 fused_ordering(82) 00:13:23.120 fused_ordering(83) 00:13:23.120 fused_ordering(84) 00:13:23.120 fused_ordering(85) 00:13:23.120 fused_ordering(86) 00:13:23.120 fused_ordering(87) 00:13:23.120 fused_ordering(88) 00:13:23.120 fused_ordering(89) 00:13:23.120 fused_ordering(90) 00:13:23.120 fused_ordering(91) 00:13:23.120 fused_ordering(92) 00:13:23.120 fused_ordering(93) 00:13:23.120 fused_ordering(94) 00:13:23.120 fused_ordering(95) 00:13:23.120 fused_ordering(96) 00:13:23.120 fused_ordering(97) 00:13:23.120 fused_ordering(98) 
00:13:23.120 fused_ordering(99) 00:13:23.120 fused_ordering(100) 00:13:23.120 fused_ordering(101) 00:13:23.120 fused_ordering(102) 00:13:23.120 fused_ordering(103) 00:13:23.120 fused_ordering(104) 00:13:23.120 fused_ordering(105) 00:13:23.120 fused_ordering(106) 00:13:23.120 fused_ordering(107) 00:13:23.120 fused_ordering(108) 00:13:23.120 fused_ordering(109) 00:13:23.120 fused_ordering(110) 00:13:23.120 fused_ordering(111) 00:13:23.120 fused_ordering(112) 00:13:23.120 fused_ordering(113) 00:13:23.120 fused_ordering(114) 00:13:23.120 fused_ordering(115) 00:13:23.120 fused_ordering(116) 00:13:23.120 fused_ordering(117) 00:13:23.120 fused_ordering(118) 00:13:23.120 fused_ordering(119) 00:13:23.120 fused_ordering(120) 00:13:23.120 fused_ordering(121) 00:13:23.120 fused_ordering(122) 00:13:23.120 fused_ordering(123) 00:13:23.120 fused_ordering(124) 00:13:23.120 fused_ordering(125) 00:13:23.120 fused_ordering(126) 00:13:23.120 fused_ordering(127) 00:13:23.120 fused_ordering(128) 00:13:23.120 fused_ordering(129) 00:13:23.120 fused_ordering(130) 00:13:23.120 fused_ordering(131) 00:13:23.120 fused_ordering(132) 00:13:23.120 fused_ordering(133) 00:13:23.120 fused_ordering(134) 00:13:23.120 fused_ordering(135) 00:13:23.120 fused_ordering(136) 00:13:23.120 fused_ordering(137) 00:13:23.120 fused_ordering(138) 00:13:23.120 fused_ordering(139) 00:13:23.120 fused_ordering(140) 00:13:23.120 fused_ordering(141) 00:13:23.120 fused_ordering(142) 00:13:23.120 fused_ordering(143) 00:13:23.120 fused_ordering(144) 00:13:23.120 fused_ordering(145) 00:13:23.120 fused_ordering(146) 00:13:23.120 fused_ordering(147) 00:13:23.120 fused_ordering(148) 00:13:23.120 fused_ordering(149) 00:13:23.120 fused_ordering(150) 00:13:23.120 fused_ordering(151) 00:13:23.120 fused_ordering(152) 00:13:23.120 fused_ordering(153) 00:13:23.120 fused_ordering(154) 00:13:23.120 fused_ordering(155) 00:13:23.120 fused_ordering(156) 00:13:23.120 fused_ordering(157) 00:13:23.120 fused_ordering(158) 00:13:23.120 fused_ordering(159) 00:13:23.120 fused_ordering(160) 00:13:23.120 fused_ordering(161) 00:13:23.120 fused_ordering(162) 00:13:23.120 fused_ordering(163) 00:13:23.120 fused_ordering(164) 00:13:23.120 fused_ordering(165) 00:13:23.120 fused_ordering(166) 00:13:23.120 fused_ordering(167) 00:13:23.120 fused_ordering(168) 00:13:23.120 fused_ordering(169) 00:13:23.120 fused_ordering(170) 00:13:23.120 fused_ordering(171) 00:13:23.120 fused_ordering(172) 00:13:23.120 fused_ordering(173) 00:13:23.120 fused_ordering(174) 00:13:23.120 fused_ordering(175) 00:13:23.120 fused_ordering(176) 00:13:23.120 fused_ordering(177) 00:13:23.120 fused_ordering(178) 00:13:23.120 fused_ordering(179) 00:13:23.120 fused_ordering(180) 00:13:23.120 fused_ordering(181) 00:13:23.120 fused_ordering(182) 00:13:23.120 fused_ordering(183) 00:13:23.120 fused_ordering(184) 00:13:23.120 fused_ordering(185) 00:13:23.120 fused_ordering(186) 00:13:23.120 fused_ordering(187) 00:13:23.120 fused_ordering(188) 00:13:23.120 fused_ordering(189) 00:13:23.120 fused_ordering(190) 00:13:23.120 fused_ordering(191) 00:13:23.120 fused_ordering(192) 00:13:23.120 fused_ordering(193) 00:13:23.120 fused_ordering(194) 00:13:23.120 fused_ordering(195) 00:13:23.120 fused_ordering(196) 00:13:23.120 fused_ordering(197) 00:13:23.120 fused_ordering(198) 00:13:23.120 fused_ordering(199) 00:13:23.120 fused_ordering(200) 00:13:23.120 fused_ordering(201) 00:13:23.120 fused_ordering(202) 00:13:23.120 fused_ordering(203) 00:13:23.120 fused_ordering(204) 00:13:23.120 fused_ordering(205) 00:13:23.379 
fused_ordering(206) 00:13:23.379 fused_ordering(207) 00:13:23.379 fused_ordering(208) 00:13:23.379 fused_ordering(209) 00:13:23.379 fused_ordering(210) 00:13:23.379 fused_ordering(211) 00:13:23.379 fused_ordering(212) 00:13:23.379 fused_ordering(213) 00:13:23.379 fused_ordering(214) 00:13:23.379 fused_ordering(215) 00:13:23.379 fused_ordering(216) 00:13:23.379 fused_ordering(217) 00:13:23.379 fused_ordering(218) 00:13:23.379 fused_ordering(219) 00:13:23.379 fused_ordering(220) 00:13:23.379 fused_ordering(221) 00:13:23.379 fused_ordering(222) 00:13:23.379 fused_ordering(223) 00:13:23.379 fused_ordering(224) 00:13:23.379 fused_ordering(225) 00:13:23.379 fused_ordering(226) 00:13:23.379 fused_ordering(227) 00:13:23.379 fused_ordering(228) 00:13:23.379 fused_ordering(229) 00:13:23.379 fused_ordering(230) 00:13:23.379 fused_ordering(231) 00:13:23.379 fused_ordering(232) 00:13:23.379 fused_ordering(233) 00:13:23.379 fused_ordering(234) 00:13:23.379 fused_ordering(235) 00:13:23.379 fused_ordering(236) 00:13:23.379 fused_ordering(237) 00:13:23.379 fused_ordering(238) 00:13:23.379 fused_ordering(239) 00:13:23.379 fused_ordering(240) 00:13:23.379 fused_ordering(241) 00:13:23.379 fused_ordering(242) 00:13:23.379 fused_ordering(243) 00:13:23.379 fused_ordering(244) 00:13:23.379 fused_ordering(245) 00:13:23.379 fused_ordering(246) 00:13:23.379 fused_ordering(247) 00:13:23.379 fused_ordering(248) 00:13:23.379 fused_ordering(249) 00:13:23.379 fused_ordering(250) 00:13:23.379 fused_ordering(251) 00:13:23.379 fused_ordering(252) 00:13:23.379 fused_ordering(253) 00:13:23.379 fused_ordering(254) 00:13:23.379 fused_ordering(255) 00:13:23.379 fused_ordering(256) 00:13:23.379 fused_ordering(257) 00:13:23.379 fused_ordering(258) 00:13:23.379 fused_ordering(259) 00:13:23.379 fused_ordering(260) 00:13:23.379 fused_ordering(261) 00:13:23.379 fused_ordering(262) 00:13:23.379 fused_ordering(263) 00:13:23.379 fused_ordering(264) 00:13:23.379 fused_ordering(265) 00:13:23.379 fused_ordering(266) 00:13:23.379 fused_ordering(267) 00:13:23.379 fused_ordering(268) 00:13:23.379 fused_ordering(269) 00:13:23.379 fused_ordering(270) 00:13:23.379 fused_ordering(271) 00:13:23.379 fused_ordering(272) 00:13:23.379 fused_ordering(273) 00:13:23.379 fused_ordering(274) 00:13:23.379 fused_ordering(275) 00:13:23.379 fused_ordering(276) 00:13:23.379 fused_ordering(277) 00:13:23.379 fused_ordering(278) 00:13:23.379 fused_ordering(279) 00:13:23.379 fused_ordering(280) 00:13:23.379 fused_ordering(281) 00:13:23.379 fused_ordering(282) 00:13:23.379 fused_ordering(283) 00:13:23.379 fused_ordering(284) 00:13:23.379 fused_ordering(285) 00:13:23.379 fused_ordering(286) 00:13:23.379 fused_ordering(287) 00:13:23.379 fused_ordering(288) 00:13:23.379 fused_ordering(289) 00:13:23.379 fused_ordering(290) 00:13:23.379 fused_ordering(291) 00:13:23.379 fused_ordering(292) 00:13:23.379 fused_ordering(293) 00:13:23.379 fused_ordering(294) 00:13:23.379 fused_ordering(295) 00:13:23.379 fused_ordering(296) 00:13:23.379 fused_ordering(297) 00:13:23.379 fused_ordering(298) 00:13:23.379 fused_ordering(299) 00:13:23.379 fused_ordering(300) 00:13:23.379 fused_ordering(301) 00:13:23.379 fused_ordering(302) 00:13:23.379 fused_ordering(303) 00:13:23.379 fused_ordering(304) 00:13:23.379 fused_ordering(305) 00:13:23.379 fused_ordering(306) 00:13:23.379 fused_ordering(307) 00:13:23.379 fused_ordering(308) 00:13:23.379 fused_ordering(309) 00:13:23.379 fused_ordering(310) 00:13:23.379 fused_ordering(311) 00:13:23.379 fused_ordering(312) 00:13:23.379 fused_ordering(313) 
00:13:23.379 fused_ordering(314) 00:13:23.379 fused_ordering(315) 00:13:23.379 fused_ordering(316) 00:13:23.379 fused_ordering(317) 00:13:23.379 fused_ordering(318) 00:13:23.379 fused_ordering(319) 00:13:23.379 fused_ordering(320) 00:13:23.379 fused_ordering(321) 00:13:23.379 fused_ordering(322) 00:13:23.379 fused_ordering(323) 00:13:23.379 fused_ordering(324) 00:13:23.379 fused_ordering(325) 00:13:23.379 fused_ordering(326) 00:13:23.379 fused_ordering(327) 00:13:23.380 fused_ordering(328) 00:13:23.380 fused_ordering(329) 00:13:23.380 fused_ordering(330) 00:13:23.380 fused_ordering(331) 00:13:23.380 fused_ordering(332) 00:13:23.380 fused_ordering(333) 00:13:23.380 fused_ordering(334) 00:13:23.380 fused_ordering(335) 00:13:23.380 fused_ordering(336) 00:13:23.380 fused_ordering(337) 00:13:23.380 fused_ordering(338) 00:13:23.380 fused_ordering(339) 00:13:23.380 fused_ordering(340) 00:13:23.380 fused_ordering(341) 00:13:23.380 fused_ordering(342) 00:13:23.380 fused_ordering(343) 00:13:23.380 fused_ordering(344) 00:13:23.380 fused_ordering(345) 00:13:23.380 fused_ordering(346) 00:13:23.380 fused_ordering(347) 00:13:23.380 fused_ordering(348) 00:13:23.380 fused_ordering(349) 00:13:23.380 fused_ordering(350) 00:13:23.380 fused_ordering(351) 00:13:23.380 fused_ordering(352) 00:13:23.380 fused_ordering(353) 00:13:23.380 fused_ordering(354) 00:13:23.380 fused_ordering(355) 00:13:23.380 fused_ordering(356) 00:13:23.380 fused_ordering(357) 00:13:23.380 fused_ordering(358) 00:13:23.380 fused_ordering(359) 00:13:23.380 fused_ordering(360) 00:13:23.380 fused_ordering(361) 00:13:23.380 fused_ordering(362) 00:13:23.380 fused_ordering(363) 00:13:23.380 fused_ordering(364) 00:13:23.380 fused_ordering(365) 00:13:23.380 fused_ordering(366) 00:13:23.380 fused_ordering(367) 00:13:23.380 fused_ordering(368) 00:13:23.380 fused_ordering(369) 00:13:23.380 fused_ordering(370) 00:13:23.380 fused_ordering(371) 00:13:23.380 fused_ordering(372) 00:13:23.380 fused_ordering(373) 00:13:23.380 fused_ordering(374) 00:13:23.380 fused_ordering(375) 00:13:23.380 fused_ordering(376) 00:13:23.380 fused_ordering(377) 00:13:23.380 fused_ordering(378) 00:13:23.380 fused_ordering(379) 00:13:23.380 fused_ordering(380) 00:13:23.380 fused_ordering(381) 00:13:23.380 fused_ordering(382) 00:13:23.380 fused_ordering(383) 00:13:23.380 fused_ordering(384) 00:13:23.380 fused_ordering(385) 00:13:23.380 fused_ordering(386) 00:13:23.380 fused_ordering(387) 00:13:23.380 fused_ordering(388) 00:13:23.380 fused_ordering(389) 00:13:23.380 fused_ordering(390) 00:13:23.380 fused_ordering(391) 00:13:23.380 fused_ordering(392) 00:13:23.380 fused_ordering(393) 00:13:23.380 fused_ordering(394) 00:13:23.380 fused_ordering(395) 00:13:23.380 fused_ordering(396) 00:13:23.380 fused_ordering(397) 00:13:23.380 fused_ordering(398) 00:13:23.380 fused_ordering(399) 00:13:23.380 fused_ordering(400) 00:13:23.380 fused_ordering(401) 00:13:23.380 fused_ordering(402) 00:13:23.380 fused_ordering(403) 00:13:23.380 fused_ordering(404) 00:13:23.380 fused_ordering(405) 00:13:23.380 fused_ordering(406) 00:13:23.380 fused_ordering(407) 00:13:23.380 fused_ordering(408) 00:13:23.380 fused_ordering(409) 00:13:23.380 fused_ordering(410) 00:13:23.638 fused_ordering(411) 00:13:23.638 fused_ordering(412) 00:13:23.638 fused_ordering(413) 00:13:23.638 fused_ordering(414) 00:13:23.638 fused_ordering(415) 00:13:23.638 fused_ordering(416) 00:13:23.638 fused_ordering(417) 00:13:23.638 fused_ordering(418) 00:13:23.638 fused_ordering(419) 00:13:23.638 fused_ordering(420) 00:13:23.638 
fused_ordering(421) 00:13:23.638 fused_ordering(422) 00:13:23.638 fused_ordering(423) 00:13:23.638 fused_ordering(424) 00:13:23.638 fused_ordering(425) 00:13:23.638 fused_ordering(426) 00:13:23.638 fused_ordering(427) 00:13:23.638 fused_ordering(428) 00:13:23.638 fused_ordering(429) 00:13:23.638 fused_ordering(430) 00:13:23.638 fused_ordering(431) 00:13:23.638 fused_ordering(432) 00:13:23.638 fused_ordering(433) 00:13:23.638 fused_ordering(434) 00:13:23.638 fused_ordering(435) 00:13:23.638 fused_ordering(436) 00:13:23.638 fused_ordering(437) 00:13:23.638 fused_ordering(438) 00:13:23.638 fused_ordering(439) 00:13:23.638 fused_ordering(440) 00:13:23.638 fused_ordering(441) 00:13:23.638 fused_ordering(442) 00:13:23.639 fused_ordering(443) 00:13:23.639 fused_ordering(444) 00:13:23.639 fused_ordering(445) 00:13:23.639 fused_ordering(446) 00:13:23.639 fused_ordering(447) 00:13:23.639 fused_ordering(448) 00:13:23.639 fused_ordering(449) 00:13:23.639 fused_ordering(450) 00:13:23.639 fused_ordering(451) 00:13:23.639 fused_ordering(452) 00:13:23.639 fused_ordering(453) 00:13:23.639 fused_ordering(454) 00:13:23.639 fused_ordering(455) 00:13:23.639 fused_ordering(456) 00:13:23.639 fused_ordering(457) 00:13:23.639 fused_ordering(458) 00:13:23.639 fused_ordering(459) 00:13:23.639 fused_ordering(460) 00:13:23.639 fused_ordering(461) 00:13:23.639 fused_ordering(462) 00:13:23.639 fused_ordering(463) 00:13:23.639 fused_ordering(464) 00:13:23.639 fused_ordering(465) 00:13:23.639 fused_ordering(466) 00:13:23.639 fused_ordering(467) 00:13:23.639 fused_ordering(468) 00:13:23.639 fused_ordering(469) 00:13:23.639 fused_ordering(470) 00:13:23.639 fused_ordering(471) 00:13:23.639 fused_ordering(472) 00:13:23.639 fused_ordering(473) 00:13:23.639 fused_ordering(474) 00:13:23.639 fused_ordering(475) 00:13:23.639 fused_ordering(476) 00:13:23.639 fused_ordering(477) 00:13:23.639 fused_ordering(478) 00:13:23.639 fused_ordering(479) 00:13:23.639 fused_ordering(480) 00:13:23.639 fused_ordering(481) 00:13:23.639 fused_ordering(482) 00:13:23.639 fused_ordering(483) 00:13:23.639 fused_ordering(484) 00:13:23.639 fused_ordering(485) 00:13:23.639 fused_ordering(486) 00:13:23.639 fused_ordering(487) 00:13:23.639 fused_ordering(488) 00:13:23.639 fused_ordering(489) 00:13:23.639 fused_ordering(490) 00:13:23.639 fused_ordering(491) 00:13:23.639 fused_ordering(492) 00:13:23.639 fused_ordering(493) 00:13:23.639 fused_ordering(494) 00:13:23.639 fused_ordering(495) 00:13:23.639 fused_ordering(496) 00:13:23.639 fused_ordering(497) 00:13:23.639 fused_ordering(498) 00:13:23.639 fused_ordering(499) 00:13:23.639 fused_ordering(500) 00:13:23.639 fused_ordering(501) 00:13:23.639 fused_ordering(502) 00:13:23.639 fused_ordering(503) 00:13:23.639 fused_ordering(504) 00:13:23.639 fused_ordering(505) 00:13:23.639 fused_ordering(506) 00:13:23.639 fused_ordering(507) 00:13:23.639 fused_ordering(508) 00:13:23.639 fused_ordering(509) 00:13:23.639 fused_ordering(510) 00:13:23.639 fused_ordering(511) 00:13:23.639 fused_ordering(512) 00:13:23.639 fused_ordering(513) 00:13:23.639 fused_ordering(514) 00:13:23.639 fused_ordering(515) 00:13:23.639 fused_ordering(516) 00:13:23.639 fused_ordering(517) 00:13:23.639 fused_ordering(518) 00:13:23.639 fused_ordering(519) 00:13:23.639 fused_ordering(520) 00:13:23.639 fused_ordering(521) 00:13:23.639 fused_ordering(522) 00:13:23.639 fused_ordering(523) 00:13:23.639 fused_ordering(524) 00:13:23.639 fused_ordering(525) 00:13:23.639 fused_ordering(526) 00:13:23.639 fused_ordering(527) 00:13:23.639 fused_ordering(528) 
00:13:23.639 fused_ordering(529) 00:13:23.639 fused_ordering(530) 00:13:23.639 fused_ordering(531) 00:13:23.639 fused_ordering(532) 00:13:23.639 fused_ordering(533) 00:13:23.639 fused_ordering(534) 00:13:23.639 fused_ordering(535) 00:13:23.639 fused_ordering(536) 00:13:23.639 fused_ordering(537) 00:13:23.639 fused_ordering(538) 00:13:23.639 fused_ordering(539) 00:13:23.639 fused_ordering(540) 00:13:23.639 fused_ordering(541) 00:13:23.639 fused_ordering(542) 00:13:23.639 fused_ordering(543) 00:13:23.639 fused_ordering(544) 00:13:23.639 fused_ordering(545) 00:13:23.639 fused_ordering(546) 00:13:23.639 fused_ordering(547) 00:13:23.639 fused_ordering(548) 00:13:23.639 fused_ordering(549) 00:13:23.639 fused_ordering(550) 00:13:23.639 fused_ordering(551) 00:13:23.639 fused_ordering(552) 00:13:23.639 fused_ordering(553) 00:13:23.639 fused_ordering(554) 00:13:23.639 fused_ordering(555) 00:13:23.639 fused_ordering(556) 00:13:23.639 fused_ordering(557) 00:13:23.639 fused_ordering(558) 00:13:23.639 fused_ordering(559) 00:13:23.639 fused_ordering(560) 00:13:23.639 fused_ordering(561) 00:13:23.639 fused_ordering(562) 00:13:23.639 fused_ordering(563) 00:13:23.639 fused_ordering(564) 00:13:23.639 fused_ordering(565) 00:13:23.639 fused_ordering(566) 00:13:23.639 fused_ordering(567) 00:13:23.639 fused_ordering(568) 00:13:23.639 fused_ordering(569) 00:13:23.639 fused_ordering(570) 00:13:23.639 fused_ordering(571) 00:13:23.639 fused_ordering(572) 00:13:23.639 fused_ordering(573) 00:13:23.639 fused_ordering(574) 00:13:23.639 fused_ordering(575) 00:13:23.639 fused_ordering(576) 00:13:23.639 fused_ordering(577) 00:13:23.639 fused_ordering(578) 00:13:23.639 fused_ordering(579) 00:13:23.639 fused_ordering(580) 00:13:23.639 fused_ordering(581) 00:13:23.639 fused_ordering(582) 00:13:23.639 fused_ordering(583) 00:13:23.639 fused_ordering(584) 00:13:23.639 fused_ordering(585) 00:13:23.639 fused_ordering(586) 00:13:23.639 fused_ordering(587) 00:13:23.639 fused_ordering(588) 00:13:23.639 fused_ordering(589) 00:13:23.639 fused_ordering(590) 00:13:23.639 fused_ordering(591) 00:13:23.639 fused_ordering(592) 00:13:23.639 fused_ordering(593) 00:13:23.639 fused_ordering(594) 00:13:23.639 fused_ordering(595) 00:13:23.639 fused_ordering(596) 00:13:23.639 fused_ordering(597) 00:13:23.639 fused_ordering(598) 00:13:23.639 fused_ordering(599) 00:13:23.639 fused_ordering(600) 00:13:23.639 fused_ordering(601) 00:13:23.639 fused_ordering(602) 00:13:23.639 fused_ordering(603) 00:13:23.639 fused_ordering(604) 00:13:23.639 fused_ordering(605) 00:13:23.639 fused_ordering(606) 00:13:23.639 fused_ordering(607) 00:13:23.639 fused_ordering(608) 00:13:23.639 fused_ordering(609) 00:13:23.639 fused_ordering(610) 00:13:23.639 fused_ordering(611) 00:13:23.639 fused_ordering(612) 00:13:23.639 fused_ordering(613) 00:13:23.639 fused_ordering(614) 00:13:23.639 fused_ordering(615) 00:13:24.206 fused_ordering(616) 00:13:24.206 fused_ordering(617) 00:13:24.206 fused_ordering(618) 00:13:24.206 fused_ordering(619) 00:13:24.206 fused_ordering(620) 00:13:24.206 fused_ordering(621) 00:13:24.206 fused_ordering(622) 00:13:24.206 fused_ordering(623) 00:13:24.206 fused_ordering(624) 00:13:24.206 fused_ordering(625) 00:13:24.206 fused_ordering(626) 00:13:24.206 fused_ordering(627) 00:13:24.206 fused_ordering(628) 00:13:24.206 fused_ordering(629) 00:13:24.206 fused_ordering(630) 00:13:24.206 fused_ordering(631) 00:13:24.206 fused_ordering(632) 00:13:24.206 fused_ordering(633) 00:13:24.206 fused_ordering(634) 00:13:24.206 fused_ordering(635) 00:13:24.206 
fused_ordering(636) 00:13:24.206 [fused_ordering(637) through fused_ordering(957) omitted: identical per-iteration markers, timestamped between 00:13:24.206 and 00:13:24.773] 00:13:24.773 fused_ordering(958)
00:13:24.773 fused_ordering(959) 00:13:24.773 fused_ordering(960) 00:13:24.773 fused_ordering(961) 00:13:24.773 fused_ordering(962) 00:13:24.773 fused_ordering(963) 00:13:24.773 fused_ordering(964) 00:13:24.773 fused_ordering(965) 00:13:24.773 fused_ordering(966) 00:13:24.773 fused_ordering(967) 00:13:24.773 fused_ordering(968) 00:13:24.773 fused_ordering(969) 00:13:24.773 fused_ordering(970) 00:13:24.773 fused_ordering(971) 00:13:24.773 fused_ordering(972) 00:13:24.773 fused_ordering(973) 00:13:24.773 fused_ordering(974) 00:13:24.773 fused_ordering(975) 00:13:24.773 fused_ordering(976) 00:13:24.773 fused_ordering(977) 00:13:24.773 fused_ordering(978) 00:13:24.773 fused_ordering(979) 00:13:24.773 fused_ordering(980) 00:13:24.773 fused_ordering(981) 00:13:24.773 fused_ordering(982) 00:13:24.773 fused_ordering(983) 00:13:24.773 fused_ordering(984) 00:13:24.773 fused_ordering(985) 00:13:24.773 fused_ordering(986) 00:13:24.773 fused_ordering(987) 00:13:24.773 fused_ordering(988) 00:13:24.773 fused_ordering(989) 00:13:24.773 fused_ordering(990) 00:13:24.773 fused_ordering(991) 00:13:24.773 fused_ordering(992) 00:13:24.773 fused_ordering(993) 00:13:24.773 fused_ordering(994) 00:13:24.773 fused_ordering(995) 00:13:24.773 fused_ordering(996) 00:13:24.773 fused_ordering(997) 00:13:24.773 fused_ordering(998) 00:13:24.773 fused_ordering(999) 00:13:24.773 fused_ordering(1000) 00:13:24.773 fused_ordering(1001) 00:13:24.773 fused_ordering(1002) 00:13:24.773 fused_ordering(1003) 00:13:24.773 fused_ordering(1004) 00:13:24.773 fused_ordering(1005) 00:13:24.773 fused_ordering(1006) 00:13:24.773 fused_ordering(1007) 00:13:24.773 fused_ordering(1008) 00:13:24.773 fused_ordering(1009) 00:13:24.773 fused_ordering(1010) 00:13:24.773 fused_ordering(1011) 00:13:24.773 fused_ordering(1012) 00:13:24.773 fused_ordering(1013) 00:13:24.773 fused_ordering(1014) 00:13:24.773 fused_ordering(1015) 00:13:24.773 fused_ordering(1016) 00:13:24.773 fused_ordering(1017) 00:13:24.773 fused_ordering(1018) 00:13:24.773 fused_ordering(1019) 00:13:24.773 fused_ordering(1020) 00:13:24.773 fused_ordering(1021) 00:13:24.773 fused_ordering(1022) 00:13:24.773 fused_ordering(1023) 00:13:24.773 13:12:21 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:24.773 13:12:21 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:24.773 13:12:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:24.773 13:12:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:13:24.773 13:12:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:24.773 13:12:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:13:24.773 13:12:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:24.773 13:12:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:24.773 rmmod nvme_tcp 00:13:24.773 rmmod nvme_fabrics 00:13:24.773 rmmod nvme_keyring 00:13:24.773 13:12:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:24.773 13:12:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:13:24.773 13:12:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:13:24.773 13:12:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 86960 ']' 00:13:24.773 13:12:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 86960 00:13:24.773 13:12:21 nvmf_tcp.nvmf_fused_ordering -- 
common/autotest_common.sh@946 -- # '[' -z 86960 ']' 00:13:24.773 13:12:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # kill -0 86960 00:13:24.773 13:12:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # uname 00:13:24.773 13:12:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:24.773 13:12:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 86960 00:13:24.773 13:12:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:13:24.773 13:12:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:13:24.773 13:12:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # echo 'killing process with pid 86960' 00:13:24.773 killing process with pid 86960 00:13:24.773 13:12:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@965 -- # kill 86960 00:13:24.773 13:12:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # wait 86960 00:13:25.031 13:12:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:25.031 13:12:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:25.031 13:12:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:25.031 13:12:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:25.031 13:12:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:25.031 13:12:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:25.031 13:12:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:25.031 13:12:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:25.031 13:12:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:25.031 00:13:25.031 real 0m4.025s 00:13:25.031 user 0m4.890s 00:13:25.031 sys 0m1.295s 00:13:25.031 13:12:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:25.031 13:12:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:25.031 ************************************ 00:13:25.031 END TEST nvmf_fused_ordering 00:13:25.031 ************************************ 00:13:25.031 13:12:21 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:13:25.031 13:12:21 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:25.031 13:12:21 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:25.031 13:12:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:25.031 ************************************ 00:13:25.031 START TEST nvmf_delete_subsystem 00:13:25.031 ************************************ 00:13:25.031 13:12:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:13:25.031 * Looking for test storage... 
00:13:25.031 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:25.031 13:12:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:25.031 13:12:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:13:25.031 13:12:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:25.031 13:12:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:25.031 13:12:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:25.031 13:12:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:25.031 13:12:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:25.031 13:12:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:25.031 13:12:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:25.031 13:12:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:25.031 13:12:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:25.031 13:12:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:25.031 13:12:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:13:25.031 13:12:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=c8b8b44b-387e-43b9-a950-dc0d98528a02 00:13:25.031 13:12:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:25.031 13:12:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:25.031 13:12:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:25.031 13:12:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:25.031 13:12:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:25.289 13:12:21 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:25.289 13:12:21 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:25.289 13:12:21 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:25.289 13:12:21 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.289 13:12:21 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.289 13:12:21 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.289 13:12:21 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:13:25.289 13:12:21 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.289 13:12:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:13:25.289 13:12:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:25.289 13:12:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:25.289 13:12:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:25.289 13:12:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:25.289 13:12:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:25.289 13:12:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:25.289 13:12:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:25.289 13:12:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:25.289 13:12:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:13:25.289 13:12:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:25.289 13:12:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:25.289 13:12:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:25.289 13:12:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:25.289 13:12:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:25.289 13:12:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:25.289 13:12:21 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:25.289 13:12:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:25.289 13:12:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:25.289 13:12:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:25.289 13:12:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:25.289 13:12:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:25.289 13:12:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:25.289 13:12:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:25.289 13:12:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:25.289 13:12:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:25.289 13:12:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:25.289 13:12:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:25.289 13:12:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:25.289 13:12:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:25.289 13:12:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:25.289 13:12:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:25.289 13:12:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:25.289 13:12:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:25.289 13:12:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:25.289 13:12:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:25.290 13:12:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:25.290 13:12:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:25.290 Cannot find device "nvmf_tgt_br" 00:13:25.290 13:12:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # true 00:13:25.290 13:12:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:25.290 Cannot find device "nvmf_tgt_br2" 00:13:25.290 13:12:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # true 00:13:25.290 13:12:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:25.290 13:12:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:25.290 Cannot find device "nvmf_tgt_br" 00:13:25.290 13:12:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # true 00:13:25.290 13:12:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:25.290 Cannot find device "nvmf_tgt_br2" 00:13:25.290 13:12:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # true 00:13:25.290 13:12:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:25.290 13:12:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@161 -- # ip link delete 
nvmf_init_if 00:13:25.290 13:12:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:25.290 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:25.290 13:12:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # true 00:13:25.290 13:12:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:25.290 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:25.290 13:12:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # true 00:13:25.290 13:12:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:25.290 13:12:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:25.290 13:12:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:25.290 13:12:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:25.290 13:12:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:25.290 13:12:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:25.290 13:12:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:25.290 13:12:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:25.290 13:12:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:25.290 13:12:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:25.290 13:12:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:25.290 13:12:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:25.290 13:12:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:25.290 13:12:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:25.290 13:12:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:25.290 13:12:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:25.290 13:12:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:25.548 13:12:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:25.548 13:12:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:25.548 13:12:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:25.548 13:12:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:25.548 13:12:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:25.548 13:12:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:25.548 13:12:22 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:25.548 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:25.548 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.106 ms 00:13:25.548 00:13:25.548 --- 10.0.0.2 ping statistics --- 00:13:25.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:25.548 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:13:25.548 13:12:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:25.548 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:25.548 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:13:25.548 00:13:25.548 --- 10.0.0.3 ping statistics --- 00:13:25.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:25.548 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:13:25.548 13:12:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:25.548 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:25.548 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:13:25.548 00:13:25.548 --- 10.0.0.1 ping statistics --- 00:13:25.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:25.548 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:13:25.548 13:12:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:25.548 13:12:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@433 -- # return 0 00:13:25.548 13:12:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:25.548 13:12:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:25.548 13:12:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:25.548 13:12:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:25.548 13:12:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:25.548 13:12:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:25.548 13:12:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:25.548 13:12:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:13:25.548 13:12:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:25.548 13:12:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:25.548 13:12:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:25.548 13:12:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=87221 00:13:25.548 13:12:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:13:25.548 13:12:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 87221 00:13:25.548 13:12:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@827 -- # '[' -z 87221 ']' 00:13:25.548 13:12:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:25.548 13:12:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:25.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
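For reference, the nvmf_veth_init sequence traced above amounts to the following topology setup. This is a condensed sketch using the interface, namespace, and address names printed in the trace; the second target interface (nvmf_tgt_if2 / 10.0.0.3) is omitted for brevity, and the commands need root:

    # Create the target namespace and the two veth pairs; the initiator side stays in the root namespace.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    # Address the two ends: 10.0.0.1 for the initiator, 10.0.0.2 for the target inside the namespace.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

    # Bring everything up and bridge the two host-side peer interfaces together.
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    # Allow NVMe/TCP traffic in and verify connectivity the same way the harness does.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2
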
00:13:25.548 13:12:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:25.548 13:12:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:25.548 13:12:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:25.548 [2024-07-15 13:12:22.200676] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:13:25.548 [2024-07-15 13:12:22.200777] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:25.806 [2024-07-15 13:12:22.336901] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:25.806 [2024-07-15 13:12:22.440178] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:25.806 [2024-07-15 13:12:22.440250] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:25.806 [2024-07-15 13:12:22.440264] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:25.806 [2024-07-15 13:12:22.440275] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:25.806 [2024-07-15 13:12:22.440285] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:25.806 [2024-07-15 13:12:22.440391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:25.806 [2024-07-15 13:12:22.440406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:26.064 13:12:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:26.064 13:12:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # return 0 00:13:26.064 13:12:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:26.064 13:12:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:26.064 13:12:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:26.064 13:12:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:26.064 13:12:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:26.064 13:12:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.064 13:12:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:26.064 [2024-07-15 13:12:22.609502] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:26.064 13:12:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.064 13:12:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:26.064 13:12:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.064 13:12:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:26.064 13:12:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.064 13:12:22 nvmf_tcp.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:26.064 13:12:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.064 13:12:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:26.064 [2024-07-15 13:12:22.631460] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:26.064 13:12:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.064 13:12:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:26.064 13:12:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.064 13:12:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:26.064 NULL1 00:13:26.065 13:12:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.065 13:12:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:26.065 13:12:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.065 13:12:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:26.065 Delay0 00:13:26.065 13:12:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.065 13:12:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:26.065 13:12:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.065 13:12:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:26.065 13:12:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.065 13:12:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=87253 00:13:26.065 13:12:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:13:26.065 13:12:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:13:26.322 [2024-07-15 13:12:22.833851] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
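Outside the harness, the same target configuration can be reproduced with SPDK's scripts/rpc.py against a running nvmf_tgt. The sketch below mirrors the rpc_cmd calls traced above; paths are relative to an SPDK checkout, the default RPC socket is assumed, and the transport options (-o, -u 8192) are simply the ones the harness uses for TCP:

    # Create the TCP transport and a subsystem that allows any host, capped at 10 namespaces.
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Back the namespace with a 1000 MB null bdev wrapped in a delay bdev, so I/O sits queued
    # for roughly a second and the upcoming delete can race against it.
    scripts/rpc.py bdev_null_create NULL1 1000 512
    scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

    # Drive load from the initiator side while the subsystem is live.
    build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!
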
00:13:28.218 13:12:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:28.218 13:12:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.218 13:12:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:28.218 Write completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 starting I/O failed: -6 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 starting I/O failed: -6 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 starting I/O failed: -6 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Write completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 starting I/O failed: -6 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Write completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 starting I/O failed: -6 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Write completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 starting I/O failed: -6 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Write completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Write completed with error (sct=0, sc=8) 00:13:28.218 starting I/O failed: -6 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Write completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 starting I/O failed: -6 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Write completed with error (sct=0, sc=8) 00:13:28.218 starting I/O failed: -6 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Write completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 starting I/O failed: -6 00:13:28.218 Write completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Write completed with error (sct=0, sc=8) 00:13:28.218 Write completed with error (sct=0, sc=8) 00:13:28.218 starting I/O failed: -6 00:13:28.218 Write completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 starting I/O failed: -6 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 [2024-07-15 13:12:24.871172] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d4180 is same with the state(5) to be set 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Read completed with 
error (sct=0, sc=8) 00:13:28.218 Write completed with error (sct=0, sc=8) 00:13:28.218 Write completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Write completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Write completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Write completed with error (sct=0, sc=8) 00:13:28.218 Write completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Write completed with error (sct=0, sc=8) 00:13:28.218 Write completed with error (sct=0, sc=8) 00:13:28.218 Write completed with error (sct=0, sc=8) 00:13:28.218 Write completed with error (sct=0, sc=8) 00:13:28.218 Write completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Write completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Write completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Write completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Write completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Write completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Write completed with error (sct=0, sc=8) 00:13:28.218 Write completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Write completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Write completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Write completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 [2024-07-15 13:12:24.871918] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d7ad0 is same with the state(5) to be set 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 starting I/O failed: -6 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Write completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Read 
completed with error (sct=0, sc=8) 00:13:28.218 starting I/O failed: -6 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Write completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 starting I/O failed: -6 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Write completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 starting I/O failed: -6 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Write completed with error (sct=0, sc=8) 00:13:28.218 Write completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 starting I/O failed: -6 00:13:28.218 Write completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Write completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 starting I/O failed: -6 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 starting I/O failed: -6 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 starting I/O failed: -6 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Write completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Write completed with error (sct=0, sc=8) 00:13:28.218 starting I/O failed: -6 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Write completed with error (sct=0, sc=8) 00:13:28.218 starting I/O failed: -6 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 [2024-07-15 13:12:24.873492] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc0ac00c2f0 is same with the state(5) to be set 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Write completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Write completed with error (sct=0, sc=8) 00:13:28.218 Write completed with error (sct=0, sc=8) 00:13:28.218 Write completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Write completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Write completed with error (sct=0, sc=8) 00:13:28.218 Write completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Write completed 
with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Write completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Write completed with error (sct=0, sc=8) 00:13:28.218 Write completed with error (sct=0, sc=8) 00:13:28.218 Write completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.218 Write completed with error (sct=0, sc=8) 00:13:28.218 Write completed with error (sct=0, sc=8) 00:13:28.218 Write completed with error (sct=0, sc=8) 00:13:28.218 Read completed with error (sct=0, sc=8) 00:13:28.219 Read completed with error (sct=0, sc=8) 00:13:28.219 Read completed with error (sct=0, sc=8) 00:13:28.219 Write completed with error (sct=0, sc=8) 00:13:28.219 Read completed with error (sct=0, sc=8) 00:13:28.219 Write completed with error (sct=0, sc=8) 00:13:28.219 Write completed with error (sct=0, sc=8) 00:13:28.219 Read completed with error (sct=0, sc=8) 00:13:28.219 Write completed with error (sct=0, sc=8) 00:13:28.219 Read completed with error (sct=0, sc=8) 00:13:28.219 Read completed with error (sct=0, sc=8) 00:13:28.219 Read completed with error (sct=0, sc=8) 00:13:28.219 Write completed with error (sct=0, sc=8) 00:13:28.219 Read completed with error (sct=0, sc=8) 00:13:28.219 Write completed with error (sct=0, sc=8) 00:13:28.219 Read completed with error (sct=0, sc=8) 00:13:28.219 Write completed with error (sct=0, sc=8) 00:13:29.194 [2024-07-15 13:12:25.847817] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d9be0 is same with the state(5) to be set 00:13:29.194 Read completed with error (sct=0, sc=8) 00:13:29.194 Read completed with error (sct=0, sc=8) 00:13:29.194 Write completed with error (sct=0, sc=8) 00:13:29.194 Read completed with error (sct=0, sc=8) 00:13:29.195 Read completed with error (sct=0, sc=8) 00:13:29.195 Write completed with error (sct=0, sc=8) 00:13:29.195 Write completed with error (sct=0, sc=8) 00:13:29.195 Read completed with error (sct=0, sc=8) 00:13:29.195 Write completed with error (sct=0, sc=8) 00:13:29.195 Write completed with error (sct=0, sc=8) 00:13:29.195 Read completed with error (sct=0, sc=8) 00:13:29.195 Read completed with error (sct=0, sc=8) 00:13:29.195 Write completed with error (sct=0, sc=8) 00:13:29.195 Write completed with error (sct=0, sc=8) 00:13:29.195 Write completed with error (sct=0, sc=8) 00:13:29.195 Read completed with error (sct=0, sc=8) 00:13:29.195 Write completed with error (sct=0, sc=8) 00:13:29.195 Read completed with error (sct=0, sc=8) 00:13:29.195 Read completed with error (sct=0, sc=8) 00:13:29.195 Read completed with error (sct=0, sc=8) 00:13:29.195 Read completed with error (sct=0, sc=8) 00:13:29.195 Read completed with error (sct=0, sc=8) 00:13:29.195 Read completed with error (sct=0, sc=8) 00:13:29.195 Read completed with error (sct=0, sc=8) 00:13:29.195 Write completed with error (sct=0, sc=8) 00:13:29.195 Read completed with error (sct=0, sc=8) 00:13:29.195 Write completed with error (sct=0, sc=8) 00:13:29.195 [2024-07-15 13:12:25.872174] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d76a0 is same with the state(5) to be set 00:13:29.195 Read completed with error (sct=0, sc=8) 00:13:29.195 Write completed with error (sct=0, sc=8) 00:13:29.195 Write completed with error (sct=0, sc=8) 00:13:29.195 Read completed with 
error (sct=0, sc=8) 00:13:29.195 Write completed with error (sct=0, sc=8) 00:13:29.195 Read completed with error (sct=0, sc=8) 00:13:29.195 Read completed with error (sct=0, sc=8) 00:13:29.195 Read completed with error (sct=0, sc=8) 00:13:29.195 Write completed with error (sct=0, sc=8) 00:13:29.195 Write completed with error (sct=0, sc=8) 00:13:29.195 Write completed with error (sct=0, sc=8) 00:13:29.195 Read completed with error (sct=0, sc=8) 00:13:29.195 Read completed with error (sct=0, sc=8) 00:13:29.195 Read completed with error (sct=0, sc=8) 00:13:29.195 Write completed with error (sct=0, sc=8) 00:13:29.195 Read completed with error (sct=0, sc=8) 00:13:29.195 Read completed with error (sct=0, sc=8) 00:13:29.195 Read completed with error (sct=0, sc=8) 00:13:29.195 Read completed with error (sct=0, sc=8) 00:13:29.195 [2024-07-15 13:12:25.873024] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc0ac00bfe0 is same with the state(5) to be set 00:13:29.195 Read completed with error (sct=0, sc=8) 00:13:29.195 Read completed with error (sct=0, sc=8) 00:13:29.195 Read completed with error (sct=0, sc=8) 00:13:29.195 Read completed with error (sct=0, sc=8) 00:13:29.195 Read completed with error (sct=0, sc=8) 00:13:29.195 Read completed with error (sct=0, sc=8) 00:13:29.195 Write completed with error (sct=0, sc=8) 00:13:29.195 Read completed with error (sct=0, sc=8) 00:13:29.195 Read completed with error (sct=0, sc=8) 00:13:29.195 Read completed with error (sct=0, sc=8) 00:13:29.195 Read completed with error (sct=0, sc=8) 00:13:29.195 Write completed with error (sct=0, sc=8) 00:13:29.195 Read completed with error (sct=0, sc=8) 00:13:29.195 Read completed with error (sct=0, sc=8) 00:13:29.195 Write completed with error (sct=0, sc=8) 00:13:29.195 Write completed with error (sct=0, sc=8) 00:13:29.195 Write completed with error (sct=0, sc=8) 00:13:29.195 Write completed with error (sct=0, sc=8) 00:13:29.195 Read completed with error (sct=0, sc=8) 00:13:29.195 Read completed with error (sct=0, sc=8) 00:13:29.195 Write completed with error (sct=0, sc=8) 00:13:29.195 Write completed with error (sct=0, sc=8) 00:13:29.195 Read completed with error (sct=0, sc=8) 00:13:29.195 Write completed with error (sct=0, sc=8) 00:13:29.195 Write completed with error (sct=0, sc=8) 00:13:29.195 Read completed with error (sct=0, sc=8) 00:13:29.195 [2024-07-15 13:12:25.873460] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d4360 is same with the state(5) to be set 00:13:29.195 Write completed with error (sct=0, sc=8) 00:13:29.195 Read completed with error (sct=0, sc=8) 00:13:29.195 Write completed with error (sct=0, sc=8) 00:13:29.195 Write completed with error (sct=0, sc=8) 00:13:29.195 Read completed with error (sct=0, sc=8) 00:13:29.195 Read completed with error (sct=0, sc=8) 00:13:29.195 Read completed with error (sct=0, sc=8) 00:13:29.195 Read completed with error (sct=0, sc=8) 00:13:29.195 Read completed with error (sct=0, sc=8) 00:13:29.195 Read completed with error (sct=0, sc=8) 00:13:29.195 Read completed with error (sct=0, sc=8) 00:13:29.195 Read completed with error (sct=0, sc=8) 00:13:29.195 Write completed with error (sct=0, sc=8) 00:13:29.195 Write completed with error (sct=0, sc=8) 00:13:29.195 Read completed with error (sct=0, sc=8) 00:13:29.195 Read completed with error (sct=0, sc=8) 00:13:29.195 Read completed with error (sct=0, sc=8) 00:13:29.195 Read completed with error (sct=0, sc=8) 00:13:29.195 Write completed with error 
(sct=0, sc=8) 00:13:29.195 [2024-07-15 13:12:25.874472] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc0ac00c600 is same with the state(5) to be set 00:13:29.195 Initializing NVMe Controllers 00:13:29.195 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:29.195 Controller IO queue size 128, less than required. 00:13:29.195 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:29.195 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:13:29.195 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:13:29.195 Initialization complete. Launching workers. 00:13:29.195 ======================================================== 00:13:29.195 Latency(us) 00:13:29.195 Device Information : IOPS MiB/s Average min max 00:13:29.195 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 173.68 0.08 888688.66 697.40 1013206.48 00:13:29.195 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 160.28 0.08 917094.18 668.60 1014711.17 00:13:29.195 ======================================================== 00:13:29.195 Total : 333.96 0.16 902321.62 668.60 1014711.17 00:13:29.195 00:13:29.195 [2024-07-15 13:12:25.875084] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24d9be0 (9): Bad file descriptor 00:13:29.195 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:13:29.195 13:12:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.195 13:12:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:13:29.195 13:12:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 87253 00:13:29.195 13:12:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:13:29.761 13:12:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:13:29.761 13:12:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 87253 00:13:29.761 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (87253) - No such process 00:13:29.761 13:12:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 87253 00:13:29.761 13:12:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:13:29.761 13:12:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 87253 00:13:29.761 13:12:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:13:29.761 13:12:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:29.761 13:12:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:13:29.761 13:12:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:29.761 13:12:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 87253 00:13:29.761 13:12:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:13:29.761 13:12:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:29.761 13:12:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:29.762 13:12:26 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:29.762 13:12:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:29.762 13:12:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.762 13:12:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:29.762 13:12:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.762 13:12:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:29.762 13:12:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.762 13:12:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:29.762 [2024-07-15 13:12:26.398985] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:29.762 13:12:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.762 13:12:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:29.762 13:12:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.762 13:12:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:29.762 13:12:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.762 13:12:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=87299 00:13:29.762 13:12:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:13:29.762 13:12:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:13:29.762 13:12:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 87299 00:13:29.762 13:12:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:30.020 [2024-07-15 13:12:26.568549] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
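The first half of the test, from the nvmf_delete_subsystem call above through the NOT wait check, is essentially: delete the subsystem while spdk_nvme_perf still has I/O queued behind the delay bdev, then poll until perf notices and exits with an error. A stripped-down sketch of that pattern, assuming $perf_pid holds the backgrounded perf process started earlier:

    # Delete the subsystem out from under the running workload; queued I/O completes with errors.
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

    # Poll for perf to notice the missing subsystem and exit; give up after ~15 seconds.
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do
        (( delay++ > 30 )) && { echo "perf did not exit in time" >&2; exit 1; }
        sleep 0.5
    done

    # perf exits non-zero because its I/O failed; in this phase that is the expected outcome.
    wait "$perf_pid" || true

The second half, traced below, re-creates the subsystem and runs a shorter perf pass (-t 3) with the same polling loop, this time expecting perf to finish cleanly.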
00:13:30.277 13:12:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:30.277 13:12:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 87299 00:13:30.277 13:12:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:30.841 13:12:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:30.841 13:12:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 87299 00:13:30.841 13:12:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:31.405 13:12:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:31.405 13:12:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 87299 00:13:31.405 13:12:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:31.970 13:12:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:31.970 13:12:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 87299 00:13:31.970 13:12:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:32.227 13:12:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:32.228 13:12:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 87299 00:13:32.228 13:12:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:32.792 13:12:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:32.792 13:12:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 87299 00:13:32.792 13:12:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:33.052 Initializing NVMe Controllers 00:13:33.052 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:33.052 Controller IO queue size 128, less than required. 00:13:33.052 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:33.052 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:13:33.052 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:13:33.052 Initialization complete. Launching workers. 
00:13:33.052 ======================================================== 00:13:33.052 Latency(us) 00:13:33.052 Device Information : IOPS MiB/s Average min max 00:13:33.052 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003142.33 1000136.13 1010608.72 00:13:33.052 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005418.64 1000440.61 1041850.29 00:13:33.052 ======================================================== 00:13:33.052 Total : 256.00 0.12 1004280.48 1000136.13 1041850.29 00:13:33.052 00:13:33.316 13:12:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:33.316 13:12:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 87299 00:13:33.316 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (87299) - No such process 00:13:33.316 13:12:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 87299 00:13:33.316 13:12:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:13:33.316 13:12:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:13:33.316 13:12:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:33.316 13:12:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:13:33.316 13:12:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:33.316 13:12:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:13:33.316 13:12:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:33.316 13:12:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:33.316 rmmod nvme_tcp 00:13:33.316 rmmod nvme_fabrics 00:13:33.316 rmmod nvme_keyring 00:13:33.316 13:12:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:33.316 13:12:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:13:33.316 13:12:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:13:33.316 13:12:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 87221 ']' 00:13:33.316 13:12:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 87221 00:13:33.316 13:12:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@946 -- # '[' -z 87221 ']' 00:13:33.316 13:12:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # kill -0 87221 00:13:33.573 13:12:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # uname 00:13:33.573 13:12:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:33.573 13:12:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 87221 00:13:33.573 13:12:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:33.573 13:12:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:33.573 killing process with pid 87221 00:13:33.573 13:12:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # echo 'killing process with pid 87221' 00:13:33.573 13:12:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@965 -- # kill 87221 00:13:33.573 13:12:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # wait 87221 00:13:33.573 13:12:30 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:33.573 13:12:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:33.573 13:12:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:33.573 13:12:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:33.573 13:12:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:33.573 13:12:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:33.573 13:12:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:33.573 13:12:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:33.831 13:12:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:33.831 00:13:33.831 real 0m8.656s 00:13:33.831 user 0m27.290s 00:13:33.831 sys 0m1.511s 00:13:33.831 13:12:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:33.831 13:12:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:33.831 ************************************ 00:13:33.831 END TEST nvmf_delete_subsystem 00:13:33.831 ************************************ 00:13:33.831 13:12:30 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:13:33.831 13:12:30 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:33.831 13:12:30 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:33.831 13:12:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:33.831 ************************************ 00:13:33.831 START TEST nvmf_ns_masking 00:13:33.831 ************************************ 00:13:33.831 13:12:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1121 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:13:33.831 * Looking for test storage... 
00:13:33.831 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:33.831 13:12:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:33.831 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:13:33.831 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:33.831 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:33.831 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:33.831 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:33.831 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:33.831 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:33.831 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:33.831 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:33.831 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:33.831 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:33.831 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:13:33.831 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=c8b8b44b-387e-43b9-a950-dc0d98528a02 00:13:33.831 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:33.831 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:33.831 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:33.831 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:33.831 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:33.831 13:12:30 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:33.831 13:12:30 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:33.831 13:12:30 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:33.831 13:12:30 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.831 13:12:30 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.831 13:12:30 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.831 13:12:30 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:13:33.831 13:12:30 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.831 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:13:33.831 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:33.831 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:33.831 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:33.831 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:33.831 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:33.831 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:33.831 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:33.831 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:33.831 13:12:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:33.831 13:12:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # loops=5 00:13:33.831 13:12:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:13:33.831 13:12:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:13:33.831 13:12:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # uuidgen 00:13:33.831 13:12:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # HOSTID=05a16b06-4f47-4245-ae98-4ecc5a2ed890 00:13:33.831 13:12:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvmftestinit 00:13:33.831 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:33.831 13:12:30 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:33.831 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:33.831 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:33.831 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:33.831 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:33.831 13:12:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:33.831 13:12:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:33.831 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:33.831 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:33.831 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:33.831 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:33.831 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:33.831 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:33.831 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:33.831 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:33.831 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:33.831 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:33.831 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:33.831 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:33.831 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:33.831 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:33.831 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:33.831 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:33.831 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:33.831 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:33.831 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:33.831 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:33.831 Cannot find device "nvmf_tgt_br" 00:13:33.831 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@155 -- # true 00:13:33.831 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:33.831 Cannot find device "nvmf_tgt_br2" 00:13:33.831 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@156 -- # true 00:13:33.831 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:33.832 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:33.832 Cannot find device "nvmf_tgt_br" 00:13:33.832 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@158 -- # true 00:13:33.832 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@159 -- # ip link set 
nvmf_tgt_br2 down 00:13:33.832 Cannot find device "nvmf_tgt_br2" 00:13:33.832 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@159 -- # true 00:13:33.832 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:34.089 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:34.089 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:34.089 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:34.089 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@162 -- # true 00:13:34.089 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:34.089 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:34.089 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@163 -- # true 00:13:34.089 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:34.089 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:34.089 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:34.089 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:34.089 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:34.089 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:34.089 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:34.089 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:34.089 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:34.089 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:34.089 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:34.089 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:34.089 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:34.089 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:34.089 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:34.089 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:34.089 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:34.089 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:34.089 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:34.089 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:34.089 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:34.089 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p 
tcp --dport 4420 -j ACCEPT 00:13:34.089 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:34.089 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:34.089 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:34.089 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:13:34.089 00:13:34.089 --- 10.0.0.2 ping statistics --- 00:13:34.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:34.089 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:13:34.089 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:34.089 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:34.089 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:13:34.089 00:13:34.089 --- 10.0.0.3 ping statistics --- 00:13:34.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:34.089 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:13:34.089 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:34.089 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:34.089 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:13:34.089 00:13:34.089 --- 10.0.0.1 ping statistics --- 00:13:34.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:34.089 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:13:34.089 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:34.089 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@433 -- # return 0 00:13:34.089 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:34.089 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:34.089 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:34.089 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:34.089 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:34.089 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:34.089 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:34.347 13:12:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:13:34.347 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:34.347 13:12:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:34.347 13:12:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:34.347 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=87530 00:13:34.347 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:34.347 13:12:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 87530 00:13:34.347 13:12:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@827 -- # '[' -z 87530 ']' 00:13:34.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
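Before the target app is started, nvmf_veth_init (traced above) builds a veth/bridge topology so the initiator in the default network namespace at 10.0.0.1 can reach an nvmf_tgt running inside nvmf_tgt_ns_spdk at 10.0.0.2. A condensed sketch of those steps, using the same interface and namespace names as the trace (run as root; the second target interface for 10.0.0.3 is created the same way and omitted here):

  # One namespace plus a veth pair per side; names match the trace above.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  # Bridge the host-side veth ends together and allow NVMe/TCP (port 4420) in.
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2   # sanity check before launching nvmf_tgt inside the namespace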
00:13:34.347 13:12:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:34.347 13:12:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:34.347 13:12:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:34.347 13:12:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:34.347 13:12:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:34.347 [2024-07-15 13:12:30.890647] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:13:34.347 [2024-07-15 13:12:30.890907] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:34.347 [2024-07-15 13:12:31.032887] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:34.604 [2024-07-15 13:12:31.133285] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:34.604 [2024-07-15 13:12:31.133508] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:34.604 [2024-07-15 13:12:31.133661] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:34.604 [2024-07-15 13:12:31.133870] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:34.604 [2024-07-15 13:12:31.133885] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:34.604 [2024-07-15 13:12:31.134020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:34.604 [2024-07-15 13:12:31.134118] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:34.604 [2024-07-15 13:12:31.138243] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:34.604 [2024-07-15 13:12:31.138285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:35.538 13:12:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:35.538 13:12:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@860 -- # return 0 00:13:35.538 13:12:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:35.538 13:12:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:35.538 13:12:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:35.538 13:12:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:35.538 13:12:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:35.538 [2024-07-15 13:12:32.257700] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:35.795 13:12:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:13:35.795 13:12:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:13:35.795 13:12:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:36.053 Malloc1 00:13:36.053 13:12:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:36.309 Malloc2 00:13:36.309 13:12:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:36.874 13:12:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:13:36.874 13:12:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:37.131 [2024-07-15 13:12:33.826540] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:37.131 13:12:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@61 -- # connect 00:13:37.131 13:12:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 05a16b06-4f47-4245-ae98-4ecc5a2ed890 -a 10.0.0.2 -s 4420 -i 4 00:13:37.389 13:12:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:13:37.389 13:12:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:13:37.389 13:12:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:13:37.389 13:12:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:13:37.389 13:12:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:13:39.288 13:12:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:13:39.288 13:12:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:13:39.288 13:12:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:13:39.288 13:12:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:13:39.288 13:12:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:13:39.288 13:12:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:13:39.288 13:12:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:13:39.288 13:12:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:39.546 13:12:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:13:39.546 13:12:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:13:39.546 13:12:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:13:39.546 13:12:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:39.546 13:12:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:13:39.546 [ 0]:0x1 00:13:39.546 13:12:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:39.546 13:12:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:39.546 13:12:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=11eca1f277ac4ce6ba9574110162bb1f 00:13:39.546 13:12:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 11eca1f277ac4ce6ba9574110162bb1f != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:39.546 13:12:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:13:39.805 13:12:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:13:39.805 13:12:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:39.805 13:12:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:13:39.805 [ 0]:0x1 00:13:39.805 13:12:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:39.805 13:12:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:39.805 13:12:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=11eca1f277ac4ce6ba9574110162bb1f 00:13:39.805 13:12:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 11eca1f277ac4ce6ba9574110162bb1f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:39.805 13:12:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:13:39.805 13:12:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:39.805 13:12:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:13:39.805 [ 1]:0x2 00:13:39.805 13:12:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:39.805 13:12:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:39.805 13:12:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=14dcc449c6104e919bee6b2e694c4fed 00:13:39.805 13:12:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 14dcc449c6104e919bee6b2e694c4fed != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:39.805 13:12:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@69 -- # disconnect 00:13:39.805 13:12:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:40.063 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:40.063 13:12:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:40.320 13:12:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:13:40.577 13:12:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@77 -- # connect 1 00:13:40.577 13:12:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 05a16b06-4f47-4245-ae98-4ecc5a2ed890 -a 10.0.0.2 -s 4420 -i 4 00:13:40.577 13:12:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:13:40.577 13:12:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:13:40.577 13:12:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:13:40.577 13:12:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 1 ]] 00:13:40.578 13:12:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=1 00:13:40.578 13:12:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:13:43.104 13:12:39 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:13:43.104 13:12:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:13:43.104 13:12:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:13:43.104 13:12:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:13:43.104 13:12:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:13:43.104 13:12:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:13:43.104 13:12:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:43.104 13:12:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:13:43.104 13:12:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:13:43.104 13:12:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:13:43.104 13:12:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:13:43.104 13:12:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:13:43.104 13:12:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:13:43.104 13:12:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:13:43.104 13:12:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:43.104 13:12:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:13:43.104 13:12:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:43.104 13:12:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:13:43.104 13:12:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:43.104 13:12:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:13:43.104 13:12:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:43.104 13:12:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:43.104 13:12:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:13:43.104 13:12:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:43.104 13:12:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:13:43.104 13:12:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:43.104 13:12:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:43.104 13:12:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:43.104 13:12:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:13:43.104 13:12:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:43.104 13:12:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:13:43.104 [ 0]:0x2 00:13:43.104 13:12:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:43.104 13:12:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 
00:13:43.104 13:12:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=14dcc449c6104e919bee6b2e694c4fed 00:13:43.104 13:12:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 14dcc449c6104e919bee6b2e694c4fed != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:43.104 13:12:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:43.104 13:12:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:13:43.104 13:12:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:43.104 13:12:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:13:43.104 [ 0]:0x1 00:13:43.104 13:12:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:43.104 13:12:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:43.104 13:12:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=11eca1f277ac4ce6ba9574110162bb1f 00:13:43.105 13:12:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 11eca1f277ac4ce6ba9574110162bb1f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:43.105 13:12:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:13:43.105 13:12:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:43.105 13:12:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:13:43.105 [ 1]:0x2 00:13:43.105 13:12:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:43.105 13:12:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:43.362 13:12:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=14dcc449c6104e919bee6b2e694c4fed 00:13:43.362 13:12:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 14dcc449c6104e919bee6b2e694c4fed != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:43.362 13:12:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:43.620 13:12:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:13:43.620 13:12:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:13:43.620 13:12:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:13:43.620 13:12:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:13:43.620 13:12:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:43.620 13:12:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:13:43.620 13:12:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:43.620 13:12:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:13:43.620 13:12:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:43.620 13:12:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:13:43.620 13:12:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:43.620 13:12:40 nvmf_tcp.nvmf_ns_masking 
-- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:43.620 13:12:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:13:43.620 13:12:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:43.620 13:12:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:13:43.620 13:12:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:43.620 13:12:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:43.620 13:12:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:43.620 13:12:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:13:43.620 13:12:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:43.620 13:12:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:13:43.620 [ 0]:0x2 00:13:43.620 13:12:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:43.620 13:12:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:43.620 13:12:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=14dcc449c6104e919bee6b2e694c4fed 00:13:43.620 13:12:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 14dcc449c6104e919bee6b2e694c4fed != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:43.620 13:12:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@91 -- # disconnect 00:13:43.620 13:12:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:43.878 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:43.878 13:12:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:44.140 13:12:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # connect 2 00:13:44.140 13:12:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 05a16b06-4f47-4245-ae98-4ecc5a2ed890 -a 10.0.0.2 -s 4420 -i 4 00:13:44.140 13:12:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:44.140 13:12:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:13:44.140 13:12:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:13:44.140 13:12:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:13:44.140 13:12:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:13:44.140 13:12:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:13:46.706 13:12:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:13:46.706 13:12:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:13:46.706 13:12:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:13:46.706 13:12:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:13:46.706 13:12:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 
00:13:46.706 13:12:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:13:46.706 13:12:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:13:46.706 13:12:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:46.706 13:12:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:13:46.706 13:12:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:13:46.706 13:12:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:13:46.706 13:12:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:46.706 13:12:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:13:46.706 [ 0]:0x1 00:13:46.706 13:12:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:46.706 13:12:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:46.706 13:12:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=11eca1f277ac4ce6ba9574110162bb1f 00:13:46.706 13:12:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 11eca1f277ac4ce6ba9574110162bb1f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:46.706 13:12:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:13:46.706 13:12:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:46.706 13:12:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:13:46.706 [ 1]:0x2 00:13:46.706 13:12:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:46.706 13:12:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:46.706 13:12:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=14dcc449c6104e919bee6b2e694c4fed 00:13:46.706 13:12:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 14dcc449c6104e919bee6b2e694c4fed != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:46.706 13:12:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:46.706 13:12:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:13:46.706 13:12:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:13:46.706 13:12:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:13:46.706 13:12:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:13:46.706 13:12:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:46.706 13:12:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:13:46.706 13:12:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:46.707 13:12:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:13:46.707 13:12:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:13:46.707 13:12:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:46.707 13:12:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 
-o json 00:13:46.707 13:12:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:46.707 13:12:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:13:46.707 13:12:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:46.707 13:12:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:13:46.707 13:12:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:46.707 13:12:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:46.707 13:12:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:46.707 13:12:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:13:46.707 13:12:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:13:46.707 13:12:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:46.707 [ 0]:0x2 00:13:46.707 13:12:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:46.707 13:12:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:46.707 13:12:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=14dcc449c6104e919bee6b2e694c4fed 00:13:46.707 13:12:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 14dcc449c6104e919bee6b2e694c4fed != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:46.707 13:12:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@105 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:46.707 13:12:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:13:46.707 13:12:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:46.707 13:12:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:46.707 13:12:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:46.707 13:12:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:46.707 13:12:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:46.707 13:12:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:46.707 13:12:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:46.707 13:12:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:46.707 13:12:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:13:46.707 13:12:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:46.964 [2024-07-15 13:12:43.638504] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:13:46.964 2024/07/15 13:12:43 
error on JSON-RPC call, method: nvmf_ns_remove_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 nsid:2], err: error received for nvmf_ns_remove_host method, err: Code=-32602 Msg=Invalid parameters 00:13:46.964 request: 00:13:46.964 { 00:13:46.964 "method": "nvmf_ns_remove_host", 00:13:46.964 "params": { 00:13:46.964 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:46.964 "nsid": 2, 00:13:46.964 "host": "nqn.2016-06.io.spdk:host1" 00:13:46.964 } 00:13:46.964 } 00:13:46.964 Got JSON-RPC error response 00:13:46.964 GoRPCClient: error on JSON-RPC call 00:13:46.964 13:12:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:13:46.964 13:12:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:46.964 13:12:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:46.964 13:12:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:46.964 13:12:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:13:46.964 13:12:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:13:46.964 13:12:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:13:46.964 13:12:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:13:46.964 13:12:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:46.964 13:12:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:13:46.964 13:12:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:46.964 13:12:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:13:46.964 13:12:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:46.964 13:12:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:13:46.964 13:12:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:46.964 13:12:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:47.222 13:12:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:13:47.222 13:12:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:47.222 13:12:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:13:47.222 13:12:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:47.222 13:12:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:47.222 13:12:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:47.222 13:12:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:13:47.222 13:12:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:47.222 13:12:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:13:47.222 [ 0]:0x2 00:13:47.222 13:12:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:47.222 13:12:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:47.222 13:12:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=14dcc449c6104e919bee6b2e694c4fed 
00:13:47.222 13:12:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 14dcc449c6104e919bee6b2e694c4fed != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:47.222 13:12:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # disconnect 00:13:47.222 13:12:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:47.222 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:47.222 13:12:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@110 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:47.481 13:12:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:13:47.481 13:12:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # nvmftestfini 00:13:47.481 13:12:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:47.481 13:12:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:13:47.481 13:12:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:47.481 13:12:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:13:47.481 13:12:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:47.481 13:12:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:47.481 rmmod nvme_tcp 00:13:47.481 rmmod nvme_fabrics 00:13:47.481 rmmod nvme_keyring 00:13:47.481 13:12:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:47.481 13:12:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:13:47.481 13:12:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:13:47.481 13:12:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 87530 ']' 00:13:47.481 13:12:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 87530 00:13:47.481 13:12:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@946 -- # '[' -z 87530 ']' 00:13:47.481 13:12:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@950 -- # kill -0 87530 00:13:47.482 13:12:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # uname 00:13:47.482 13:12:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:47.482 13:12:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 87530 00:13:47.482 13:12:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:47.482 13:12:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:47.482 killing process with pid 87530 00:13:47.482 13:12:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@964 -- # echo 'killing process with pid 87530' 00:13:47.482 13:12:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@965 -- # kill 87530 00:13:47.482 13:12:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@970 -- # wait 87530 00:13:48.047 13:12:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:48.047 13:12:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:48.047 13:12:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:48.047 13:12:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:48.047 13:12:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:48.047 13:12:44 nvmf_tcp.nvmf_ns_masking -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:48.047 13:12:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:48.047 13:12:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:48.047 13:12:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:48.047 00:13:48.047 real 0m14.144s 00:13:48.047 user 0m56.954s 00:13:48.047 sys 0m2.507s 00:13:48.047 13:12:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:48.047 13:12:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:48.047 ************************************ 00:13:48.047 END TEST nvmf_ns_masking 00:13:48.047 ************************************ 00:13:48.047 13:12:44 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 0 -eq 1 ]] 00:13:48.047 13:12:44 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 0 -eq 1 ]] 00:13:48.047 13:12:44 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:48.047 13:12:44 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:48.047 13:12:44 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:48.047 13:12:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:48.047 ************************************ 00:13:48.047 START TEST nvmf_host_management 00:13:48.047 ************************************ 00:13:48.047 13:12:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:48.047 * Looking for test storage... 00:13:48.047 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:48.047 13:12:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:48.047 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:13:48.047 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:48.047 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:48.047 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:48.047 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:48.047 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:48.047 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:48.047 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:48.048 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:48.048 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:48.048 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:48.048 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:13:48.048 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=c8b8b44b-387e-43b9-a950-dc0d98528a02 00:13:48.048 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:48.048 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:13:48.048 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:48.048 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:48.048 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:48.048 13:12:44 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:48.048 13:12:44 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:48.048 13:12:44 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:48.048 13:12:44 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.048 13:12:44 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.048 13:12:44 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.048 13:12:44 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:13:48.048 13:12:44 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.048 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:13:48.048 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:48.048 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:48.048 13:12:44 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:48.048 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:48.048 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:48.048 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:48.048 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:48.048 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:48.048 13:12:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:48.048 13:12:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:48.048 13:12:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:13:48.048 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:48.048 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:48.048 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:48.048 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:48.048 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:48.048 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:48.048 13:12:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:48.048 13:12:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:48.048 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:48.048 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:48.048 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:48.048 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:48.048 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:48.048 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:48.048 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:48.048 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:48.048 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:48.048 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:48.048 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:48.048 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:48.048 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:48.048 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:48.048 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:48.048 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:48.048 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@151 -- # 
NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:48.048 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:48.048 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:48.048 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:48.048 Cannot find device "nvmf_tgt_br" 00:13:48.048 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # true 00:13:48.048 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:48.048 Cannot find device "nvmf_tgt_br2" 00:13:48.048 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # true 00:13:48.048 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:48.048 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:48.048 Cannot find device "nvmf_tgt_br" 00:13:48.048 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # true 00:13:48.048 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:48.048 Cannot find device "nvmf_tgt_br2" 00:13:48.048 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # true 00:13:48.048 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:48.048 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:48.306 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:48.306 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:48.306 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:13:48.306 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:48.306 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:48.306 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:13:48.306 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:48.306 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:48.306 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:48.306 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:48.306 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:48.306 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:48.306 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:48.306 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:48.306 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:48.306 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:48.306 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@184 -- # ip 
link set nvmf_init_br up 00:13:48.306 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:48.306 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:48.306 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:48.306 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:48.306 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:48.306 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:48.306 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:48.306 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:48.306 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:48.306 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:48.306 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:48.306 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:48.306 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:48.306 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:48.306 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:13:48.306 00:13:48.306 --- 10.0.0.2 ping statistics --- 00:13:48.306 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:48.306 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:13:48.306 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:48.306 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:48.306 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.078 ms 00:13:48.306 00:13:48.306 --- 10.0.0.3 ping statistics --- 00:13:48.306 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:48.306 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:13:48.306 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:48.306 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:48.306 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:13:48.306 00:13:48.306 --- 10.0.0.1 ping statistics --- 00:13:48.306 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:48.306 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:13:48.306 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:48.306 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@433 -- # return 0 00:13:48.306 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:48.306 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:48.306 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:48.306 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:48.306 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:48.306 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:48.306 13:12:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:48.306 13:12:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:13:48.306 13:12:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:13:48.306 13:12:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:13:48.306 13:12:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:48.306 13:12:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:48.306 13:12:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:48.306 13:12:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=88119 00:13:48.306 13:12:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:13:48.307 13:12:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 88119 00:13:48.307 13:12:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 88119 ']' 00:13:48.307 13:12:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:48.307 13:12:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:48.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:48.307 13:12:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:48.307 13:12:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:48.307 13:12:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:48.564 [2024-07-15 13:12:45.075925] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
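Note: the "Cannot find device" and "Cannot open network namespace" messages earlier in this block are the harmless cleanup pass of nvmf_veth_init running before the topology exists. The commands after it build a veth/bridge fabric with the initiator side (10.0.0.1) in the root namespace and the target side (10.0.0.2 and 10.0.0.3) inside nvmf_tgt_ns_spdk, and nvmfappstart then launches nvmf_tgt inside that namespace. Condensed to a single initiator/target pair, the bring-up is roughly as follows (paths, addresses and flags from the trace; the polling loop is a simplified stand-in for the waitforlisten helper):

# Namespace, veth pairs and addresses (the test also adds nvmf_tgt_if2 at 10.0.0.3).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# Bridge the two peer ends and allow NVMe/TCP traffic in.
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2   # root namespace -> target address, as in the trace

# Start the target inside the namespace and wait for its RPC socket to answer.
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0x1E &
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 spdk_get_version >/dev/null 2>&1; do
    sleep 0.5
done

The SPDK/DPDK start-up banner that the trace prints next belongs to this nvmf_tgt instance.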
00:13:48.564 [2024-07-15 13:12:45.076040] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:48.564 [2024-07-15 13:12:45.217442] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:48.821 [2024-07-15 13:12:45.313688] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:48.821 [2024-07-15 13:12:45.314146] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:48.821 [2024-07-15 13:12:45.314268] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:48.821 [2024-07-15 13:12:45.314356] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:48.821 [2024-07-15 13:12:45.314425] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:48.821 [2024-07-15 13:12:45.314607] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:48.821 [2024-07-15 13:12:45.315005] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:48.821 [2024-07-15 13:12:45.315083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:13:48.821 [2024-07-15 13:12:45.315196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:49.388 13:12:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:49.388 13:12:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:13:49.388 13:12:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:49.388 13:12:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:49.388 13:12:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:49.646 13:12:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:49.646 13:12:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:49.646 13:12:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.646 13:12:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:49.646 [2024-07-15 13:12:46.163923] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:49.646 13:12:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.646 13:12:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:13:49.646 13:12:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:49.646 13:12:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:49.646 13:12:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:13:49.646 13:12:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:13:49.646 13:12:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:13:49.646 13:12:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.646 13:12:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 
00:13:49.646 Malloc0 00:13:49.646 [2024-07-15 13:12:46.246462] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:49.646 13:12:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.646 13:12:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:13:49.646 13:12:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:49.646 13:12:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:49.646 13:12:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=88191 00:13:49.646 13:12:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 88191 /var/tmp/bdevperf.sock 00:13:49.646 13:12:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 88191 ']' 00:13:49.646 13:12:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:49.646 13:12:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:49.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:49.646 13:12:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:49.646 13:12:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:49.646 13:12:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:49.646 13:12:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:13:49.646 13:12:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:13:49.646 13:12:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:13:49.646 13:12:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:13:49.646 13:12:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:49.646 13:12:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:49.646 { 00:13:49.646 "params": { 00:13:49.646 "name": "Nvme$subsystem", 00:13:49.646 "trtype": "$TEST_TRANSPORT", 00:13:49.646 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:49.646 "adrfam": "ipv4", 00:13:49.646 "trsvcid": "$NVMF_PORT", 00:13:49.646 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:49.646 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:49.646 "hdgst": ${hdgst:-false}, 00:13:49.646 "ddgst": ${ddgst:-false} 00:13:49.646 }, 00:13:49.646 "method": "bdev_nvme_attach_controller" 00:13:49.646 } 00:13:49.646 EOF 00:13:49.646 )") 00:13:49.646 13:12:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:13:49.646 13:12:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
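Note: the rpcs.txt assembled at host_management.sh@23 and fed to rpc_cmd at @30 is never echoed into the trace; only its effects are visible (a Malloc0 bdev, the subsystem the initiator later reaches as nqn.2016-06.io.spdk:cnode0, and the NVMe/TCP listener on 10.0.0.2:4420). A hypothetical reconstruction consistent with those effects and with the MALLOC_BDEV_SIZE, MALLOC_BLOCK_SIZE and NVMF_SERIAL values set earlier in the trace, not the literal file contents, would be:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_malloc_create -b Malloc0 64 512
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# Host-based access control must be in effect, since removing this host NQN
# later in the run tears the connection down (see the remove_host step below).
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0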
00:13:49.646 13:12:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:13:49.646 13:12:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:49.646 "params": { 00:13:49.646 "name": "Nvme0", 00:13:49.646 "trtype": "tcp", 00:13:49.646 "traddr": "10.0.0.2", 00:13:49.646 "adrfam": "ipv4", 00:13:49.646 "trsvcid": "4420", 00:13:49.646 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:49.646 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:49.646 "hdgst": false, 00:13:49.646 "ddgst": false 00:13:49.646 }, 00:13:49.646 "method": "bdev_nvme_attach_controller" 00:13:49.646 }' 00:13:49.646 [2024-07-15 13:12:46.367543] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:13:49.646 [2024-07-15 13:12:46.367669] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88191 ] 00:13:49.904 [2024-07-15 13:12:46.541234] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:50.161 [2024-07-15 13:12:46.643222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:50.161 Running I/O for 10 seconds... 00:13:50.728 13:12:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:50.728 13:12:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:13:50.728 13:12:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:13:50.728 13:12:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.728 13:12:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:50.728 13:12:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.728 13:12:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:50.728 13:12:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:13:50.728 13:12:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:13:50.728 13:12:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:13:50.728 13:12:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:13:50.728 13:12:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:13:50.728 13:12:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:13:50.728 13:12:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:13:50.728 13:12:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:13:50.728 13:12:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:13:50.728 13:12:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.728 13:12:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:50.728 13:12:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.728 13:12:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # 
read_io_count=771 00:13:50.728 13:12:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 771 -ge 100 ']' 00:13:50.728 13:12:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:13:50.728 13:12:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:13:50.728 13:12:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:13:50.728 13:12:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:50.728 13:12:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.728 13:12:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:50.728 [2024-07-15 13:12:47.399375] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f8430 is same with the state(5) to be set 00:13:50.728 [2024-07-15 13:12:47.399564] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f8430 is same with the state(5) to be set 00:13:50.729 [2024-07-15 13:12:47.403811] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:13:50.729 [2024-07-15 13:12:47.403857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.729 [2024-07-15 13:12:47.403872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:13:50.729 [2024-07-15 13:12:47.403882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.729 [2024-07-15 13:12:47.403893] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:13:50.729 [2024-07-15 13:12:47.403903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.729 [2024-07-15 13:12:47.403919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:13:50.729 [2024-07-15 13:12:47.403928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.729 [2024-07-15 13:12:47.403938] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0daf0 is same with the state(5) to be set 00:13:50.729 13:12:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.729 13:12:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:50.729 13:12:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.729 13:12:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:50.729 13:12:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.729 13:12:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:13:50.729 [2024-07-15 13:12:47.413555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.729 
[2024-07-15 13:12:47.413595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.729 [2024-07-15 13:12:47.413618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:114816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.729 [2024-07-15 13:12:47.413629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.729 [2024-07-15 13:12:47.413640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:114944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.729 [2024-07-15 13:12:47.413650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.729 [2024-07-15 13:12:47.413661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:115072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.729 [2024-07-15 13:12:47.413670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.729 [2024-07-15 13:12:47.413682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:115200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.729 [2024-07-15 13:12:47.413691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.729 [2024-07-15 13:12:47.413702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:115328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.729 [2024-07-15 13:12:47.413711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.729 [2024-07-15 13:12:47.413722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:115456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.729 [2024-07-15 13:12:47.413731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.729 [2024-07-15 13:12:47.413742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:115584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.729 [2024-07-15 13:12:47.413751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.729 [2024-07-15 13:12:47.413761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:115712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.729 [2024-07-15 13:12:47.413770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.729 [2024-07-15 13:12:47.413781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:115840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.729 [2024-07-15 13:12:47.413790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.729 [2024-07-15 13:12:47.413800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:115968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.729 [2024-07-15 
13:12:47.413809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.729 [2024-07-15 13:12:47.413820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:116096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.729 [2024-07-15 13:12:47.413829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.729 [2024-07-15 13:12:47.413849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:116224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.729 [2024-07-15 13:12:47.413859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.729 [2024-07-15 13:12:47.413870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:116352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.729 [2024-07-15 13:12:47.413879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.729 [2024-07-15 13:12:47.413890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:116480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.729 [2024-07-15 13:12:47.413899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.729 [2024-07-15 13:12:47.413910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:116608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.729 [2024-07-15 13:12:47.413918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.729 [2024-07-15 13:12:47.413929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:116736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.729 [2024-07-15 13:12:47.413937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.729 [2024-07-15 13:12:47.413948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:116864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.729 [2024-07-15 13:12:47.413957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.729 [2024-07-15 13:12:47.413969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:116992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.729 [2024-07-15 13:12:47.413978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.729 [2024-07-15 13:12:47.413989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:117120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.729 [2024-07-15 13:12:47.413999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.729 [2024-07-15 13:12:47.414010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:117248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.729 [2024-07-15 
13:12:47.414018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.729 [2024-07-15 13:12:47.414030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:117376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.729 [2024-07-15 13:12:47.414038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.729 [2024-07-15 13:12:47.414049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:117504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.729 [2024-07-15 13:12:47.414058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.729 [2024-07-15 13:12:47.414069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:117632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.729 [2024-07-15 13:12:47.414078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.729 [2024-07-15 13:12:47.414089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:117760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.729 [2024-07-15 13:12:47.414098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.729 [2024-07-15 13:12:47.414118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:117888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.729 [2024-07-15 13:12:47.414138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.729 [2024-07-15 13:12:47.414149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:118016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.729 [2024-07-15 13:12:47.414159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.729 [2024-07-15 13:12:47.414171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:118144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.729 [2024-07-15 13:12:47.414180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.729 [2024-07-15 13:12:47.414196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:118272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.729 [2024-07-15 13:12:47.414214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.729 [2024-07-15 13:12:47.414227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:118400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.729 [2024-07-15 13:12:47.414237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.729 [2024-07-15 13:12:47.414248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:118528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.729 [2024-07-15 
13:12:47.414258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.729 [2024-07-15 13:12:47.414269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:118656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.729 [2024-07-15 13:12:47.414278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.729 [2024-07-15 13:12:47.414289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:118784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.729 [2024-07-15 13:12:47.414299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.729 [2024-07-15 13:12:47.414310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:118912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.729 [2024-07-15 13:12:47.414319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.729 [2024-07-15 13:12:47.414330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:119040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.729 [2024-07-15 13:12:47.414339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.730 [2024-07-15 13:12:47.414350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:119168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.730 [2024-07-15 13:12:47.414359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.730 [2024-07-15 13:12:47.414371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:119296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.730 [2024-07-15 13:12:47.414380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.730 [2024-07-15 13:12:47.414390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:119424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.730 [2024-07-15 13:12:47.414400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.730 [2024-07-15 13:12:47.414410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:119552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.730 [2024-07-15 13:12:47.414419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.730 [2024-07-15 13:12:47.414430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:119680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.730 [2024-07-15 13:12:47.414439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.730 [2024-07-15 13:12:47.414450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:119808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.730 [2024-07-15 
13:12:47.414460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.730 [2024-07-15 13:12:47.414471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:119936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.730 [2024-07-15 13:12:47.414480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.730 [2024-07-15 13:12:47.414491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:120064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.730 [2024-07-15 13:12:47.414501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.730 [2024-07-15 13:12:47.414512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:120192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.730 [2024-07-15 13:12:47.414521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.730 [2024-07-15 13:12:47.414537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:120320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.730 [2024-07-15 13:12:47.414546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.730 [2024-07-15 13:12:47.414558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:120448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.730 [2024-07-15 13:12:47.414567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.730 [2024-07-15 13:12:47.414578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:120576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.730 [2024-07-15 13:12:47.414587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.730 [2024-07-15 13:12:47.414598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:120704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.730 [2024-07-15 13:12:47.414607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.730 [2024-07-15 13:12:47.414618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:120832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.730 [2024-07-15 13:12:47.414628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.730 [2024-07-15 13:12:47.414639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:120960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.730 [2024-07-15 13:12:47.414648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.730 [2024-07-15 13:12:47.414658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:121088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.730 [2024-07-15 
13:12:47.414667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.730 [2024-07-15 13:12:47.414678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:121216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.730 [2024-07-15 13:12:47.414687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.730 [2024-07-15 13:12:47.414698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:121344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.730 [2024-07-15 13:12:47.414707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.730 [2024-07-15 13:12:47.414717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:121472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.730 [2024-07-15 13:12:47.414726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.730 [2024-07-15 13:12:47.414745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:121600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.730 [2024-07-15 13:12:47.414754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.730 [2024-07-15 13:12:47.414768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:121728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.730 [2024-07-15 13:12:47.414777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.730 [2024-07-15 13:12:47.414787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:121856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.730 [2024-07-15 13:12:47.414796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.730 [2024-07-15 13:12:47.414807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:121984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.730 [2024-07-15 13:12:47.414816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.730 [2024-07-15 13:12:47.414827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:122112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.730 [2024-07-15 13:12:47.414837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.730 [2024-07-15 13:12:47.414849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:122240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.730 [2024-07-15 13:12:47.414858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.730 [2024-07-15 13:12:47.414873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:122368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.730 [2024-07-15 
13:12:47.414883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.730 [2024-07-15 13:12:47.414894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:122496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.730 [2024-07-15 13:12:47.414903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.730 [2024-07-15 13:12:47.414914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:122624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.730 [2024-07-15 13:12:47.414923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.730 [2024-07-15 13:12:47.414934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:122752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.730 [2024-07-15 13:12:47.414943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.730 [2024-07-15 13:12:47.415026] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xe0d550 was disconnected and freed. reset controller. 00:13:50.730 [2024-07-15 13:12:47.415101] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0daf0 (9): Bad file descriptor 00:13:50.730 [2024-07-15 13:12:47.416201] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:13:50.730 task offset: 114688 on job bdev=Nvme0n1 fails 00:13:50.730 00:13:50.730 Latency(us) 00:13:50.730 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:50.730 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:50.730 Job: Nvme0n1 ended in about 0.60 seconds with error 00:13:50.730 Verification LBA range: start 0x0 length 0x400 00:13:50.730 Nvme0n1 : 0.60 1502.85 93.93 107.35 0.00 38655.01 1921.40 37176.79 00:13:50.730 =================================================================================================================== 00:13:50.730 Total : 1502.85 93.93 107.35 0.00 38655.01 1921.40 37176.79 00:13:50.730 [2024-07-15 13:12:47.418546] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:50.730 [2024-07-15 13:12:47.428972] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
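
The run of ABORTED - SQ DELETION completions above is what bdevperf prints when the target side drops the I/O submission queue while a full queue depth (depth: 64 in this job) of WRITEs is still outstanding: every queued command is completed with that status, qpair 0xe0d550 is disconnected and freed, and the follow-up reset attempt fails on the dead socket (Bad file descriptor), so the job ends in error and the app stops on non-zero. A rough way to reproduce the same signature outside this test is to kill the target process mid-run, sketched below; the config path and $tgt_pid are placeholders, the attach parameters mirror the gen_nvmf_target_json output used further down, and the subsystems/config wrapper is the standard SPDK JSON config layout rather than something printed in this log.

# sketch only: provoke an abort storm by killing the target mid-run
cat > /tmp/bdevperf_nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# same -q/-o/-w as the runs in this log; -t lengthened so there is time to kill
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /tmp/bdevperf_nvme0.json -q 64 -o 65536 -w verify -t 10 &
perf_pid=$!
sleep 2
kill -9 "$tgt_pid"        # $tgt_pid: PID of the nvmf_tgt process (placeholder)
wait "$perf_pid" || true  # bdevperf exits non-zero, as in the failed run above
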
00:13:52.104 13:12:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 88191 00:13:52.104 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (88191) - No such process 00:13:52.104 13:12:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:13:52.104 13:12:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:13:52.105 13:12:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:13:52.105 13:12:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:13:52.105 13:12:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:13:52.105 13:12:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:13:52.105 13:12:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:52.105 13:12:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:52.105 { 00:13:52.105 "params": { 00:13:52.105 "name": "Nvme$subsystem", 00:13:52.105 "trtype": "$TEST_TRANSPORT", 00:13:52.105 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:52.105 "adrfam": "ipv4", 00:13:52.105 "trsvcid": "$NVMF_PORT", 00:13:52.105 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:52.105 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:52.105 "hdgst": ${hdgst:-false}, 00:13:52.105 "ddgst": ${ddgst:-false} 00:13:52.105 }, 00:13:52.105 "method": "bdev_nvme_attach_controller" 00:13:52.105 } 00:13:52.105 EOF 00:13:52.105 )") 00:13:52.105 13:12:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:13:52.105 13:12:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:13:52.105 13:12:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:13:52.105 13:12:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:52.105 "params": { 00:13:52.105 "name": "Nvme0", 00:13:52.105 "trtype": "tcp", 00:13:52.105 "traddr": "10.0.0.2", 00:13:52.105 "adrfam": "ipv4", 00:13:52.105 "trsvcid": "4420", 00:13:52.105 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:52.105 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:52.105 "hdgst": false, 00:13:52.105 "ddgst": false 00:13:52.105 }, 00:13:52.105 "method": "bdev_nvme_attach_controller" 00:13:52.105 }' 00:13:52.105 [2024-07-15 13:12:48.461929] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:13:52.105 [2024-07-15 13:12:48.462028] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88247 ] 00:13:52.105 [2024-07-15 13:12:48.600955] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:52.105 [2024-07-15 13:12:48.696300] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:52.363 Running I/O for 1 seconds... 
00:13:53.297 00:13:53.297 Latency(us) 00:13:53.297 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:53.297 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:53.297 Verification LBA range: start 0x0 length 0x400 00:13:53.297 Nvme0n1 : 1.04 1541.70 96.36 0.00 0.00 40696.00 6642.97 37415.10 00:13:53.297 =================================================================================================================== 00:13:53.297 Total : 1541.70 96.36 0.00 0.00 40696.00 6642.97 37415.10 00:13:53.554 13:12:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:13:53.554 13:12:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:13:53.554 13:12:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:13:53.554 13:12:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:13:53.554 13:12:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:13:53.554 13:12:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:53.554 13:12:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:13:53.554 13:12:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:53.554 13:12:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:13:53.554 13:12:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:53.554 13:12:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:53.554 rmmod nvme_tcp 00:13:53.554 rmmod nvme_fabrics 00:13:53.554 rmmod nvme_keyring 00:13:53.554 13:12:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:53.554 13:12:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:13:53.554 13:12:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:13:53.554 13:12:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 88119 ']' 00:13:53.554 13:12:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 88119 00:13:53.555 13:12:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@946 -- # '[' -z 88119 ']' 00:13:53.555 13:12:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@950 -- # kill -0 88119 00:13:53.555 13:12:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # uname 00:13:53.555 13:12:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:53.555 13:12:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 88119 00:13:53.555 13:12:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:13:53.555 13:12:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:13:53.555 13:12:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@964 -- # echo 'killing process with pid 88119' 00:13:53.555 killing process with pid 88119 00:13:53.555 13:12:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@965 -- # kill 88119 00:13:53.555 13:12:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@970 -- # wait 88119 00:13:53.812 [2024-07-15 13:12:50.496910] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd 
for core 1, errno: 2 00:13:53.812 13:12:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:53.812 13:12:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:53.812 13:12:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:53.812 13:12:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:53.812 13:12:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:53.812 13:12:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:53.812 13:12:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:53.812 13:12:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:54.071 13:12:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:54.071 13:12:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:13:54.071 ************************************ 00:13:54.071 END TEST nvmf_host_management 00:13:54.071 ************************************ 00:13:54.071 00:13:54.071 real 0m5.997s 00:13:54.071 user 0m23.496s 00:13:54.071 sys 0m1.482s 00:13:54.071 13:12:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:54.071 13:12:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:54.071 13:12:50 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:54.071 13:12:50 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:54.071 13:12:50 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:54.071 13:12:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:54.071 ************************************ 00:13:54.071 START TEST nvmf_lvol 00:13:54.071 ************************************ 00:13:54.071 13:12:50 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:54.071 * Looking for test storage... 
00:13:54.071 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:54.071 13:12:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:54.071 13:12:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:13:54.071 13:12:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:54.071 13:12:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:54.071 13:12:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:54.071 13:12:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:54.071 13:12:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:54.071 13:12:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:54.071 13:12:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:54.071 13:12:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:54.071 13:12:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:54.071 13:12:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:54.071 13:12:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:13:54.071 13:12:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=c8b8b44b-387e-43b9-a950-dc0d98528a02 00:13:54.071 13:12:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:54.071 13:12:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:54.071 13:12:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:54.071 13:12:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:54.071 13:12:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:54.071 13:12:50 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:54.071 13:12:50 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:54.071 13:12:50 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:54.071 13:12:50 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.071 13:12:50 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.071 13:12:50 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.071 13:12:50 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:13:54.071 13:12:50 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.071 13:12:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:13:54.071 13:12:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:54.071 13:12:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:54.071 13:12:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:54.071 13:12:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:54.071 13:12:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:54.071 13:12:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:54.071 13:12:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:54.071 13:12:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:54.071 13:12:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:54.071 13:12:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:54.071 13:12:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:13:54.071 13:12:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:13:54.071 13:12:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:54.071 13:12:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:13:54.071 13:12:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:54.071 13:12:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:54.071 13:12:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:54.071 13:12:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:54.071 13:12:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:54.071 13:12:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:54.071 13:12:50 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:54.072 13:12:50 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:54.072 13:12:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:54.072 13:12:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:54.072 13:12:50 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:54.072 13:12:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:54.072 13:12:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:54.072 13:12:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:54.072 13:12:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:54.072 13:12:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:54.072 13:12:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:54.072 13:12:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:54.072 13:12:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:54.072 13:12:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:54.072 13:12:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:54.072 13:12:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:54.072 13:12:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:54.072 13:12:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:54.072 13:12:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:54.072 13:12:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:54.072 13:12:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:54.072 13:12:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:54.072 Cannot find device "nvmf_tgt_br" 00:13:54.072 13:12:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # true 00:13:54.072 13:12:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:54.072 Cannot find device "nvmf_tgt_br2" 00:13:54.072 13:12:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # true 00:13:54.072 13:12:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:54.072 13:12:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:54.072 Cannot find device "nvmf_tgt_br" 00:13:54.072 13:12:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # true 00:13:54.072 13:12:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:54.072 Cannot find device "nvmf_tgt_br2" 00:13:54.072 13:12:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # true 00:13:54.072 13:12:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:54.330 13:12:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:54.330 13:12:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:54.330 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:54.330 13:12:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:13:54.330 13:12:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:54.330 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:54.330 13:12:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:13:54.330 13:12:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:54.330 13:12:50 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:54.330 13:12:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:54.330 13:12:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:54.330 13:12:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:54.330 13:12:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:54.330 13:12:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:54.330 13:12:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:54.330 13:12:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:54.330 13:12:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:54.330 13:12:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:54.330 13:12:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:54.330 13:12:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:54.330 13:12:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:54.330 13:12:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:54.330 13:12:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:54.330 13:12:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:54.330 13:12:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:54.330 13:12:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:54.330 13:12:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:54.330 13:12:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:54.330 13:12:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:54.330 13:12:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:54.330 13:12:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:54.330 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:54.330 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 00:13:54.330 00:13:54.330 --- 10.0.0.2 ping statistics --- 00:13:54.330 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:54.330 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:13:54.330 13:12:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:54.330 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:54.330 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:13:54.330 00:13:54.330 --- 10.0.0.3 ping statistics --- 00:13:54.330 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:54.330 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:13:54.330 13:12:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:54.330 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:54.330 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:13:54.330 00:13:54.330 --- 10.0.0.1 ping statistics --- 00:13:54.330 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:54.330 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:13:54.330 13:12:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:54.330 13:12:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@433 -- # return 0 00:13:54.330 13:12:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:54.330 13:12:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:54.330 13:12:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:54.330 13:12:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:54.330 13:12:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:54.330 13:12:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:54.330 13:12:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:54.330 13:12:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:13:54.330 13:12:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:54.330 13:12:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:54.330 13:12:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:54.330 13:12:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=88456 00:13:54.330 13:12:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:54.330 13:12:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 88456 00:13:54.330 13:12:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@827 -- # '[' -z 88456 ']' 00:13:54.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:54.330 13:12:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:54.330 13:12:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:54.330 13:12:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:54.330 13:12:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:54.330 13:12:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:54.588 [2024-07-15 13:12:51.140422] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:13:54.588 [2024-07-15 13:12:51.140588] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:54.588 [2024-07-15 13:12:51.283990] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:54.846 [2024-07-15 13:12:51.378301] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:54.846 [2024-07-15 13:12:51.378606] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
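
Everything from nvmf/common.sh@141 onward above is the suite's nvmf_veth_init: the "Cannot find device" and "Cannot open network namespace" messages are only its cleanup probe for leftovers of a previous run, after which it builds a network namespace for the target, three veth pairs joined by a bridge, and the 10.0.0.1/10.0.0.2/10.0.0.3 addresses that the three pings then verify. The same bring-up, condensed from the trace above into a standalone sketch (interface, namespace and address names exactly as used here; run as root):

# condensed from the nvmf_veth_init trace above; stale-interface cleanup omitted
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up && ip link set nvmf_init_br up
ip link set nvmf_tgt_br up  && ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
# the target is then launched inside the namespace and a TCP transport created,
# as in the surrounding trace (the suite waits for the RPC socket -- waitforlisten --
# between these two steps):
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
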
00:13:54.846 [2024-07-15 13:12:51.378834] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:54.846 [2024-07-15 13:12:51.378975] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:54.846 [2024-07-15 13:12:51.379180] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:54.846 [2024-07-15 13:12:51.379340] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:54.846 [2024-07-15 13:12:51.379491] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:54.846 [2024-07-15 13:12:51.379497] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:55.779 13:12:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:55.779 13:12:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@860 -- # return 0 00:13:55.779 13:12:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:55.779 13:12:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:55.779 13:12:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:55.779 13:12:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:55.779 13:12:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:55.779 [2024-07-15 13:12:52.474288] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:55.779 13:12:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:56.037 13:12:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:13:56.037 13:12:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:56.604 13:12:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:13:56.604 13:12:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:13:56.863 13:12:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:13:57.121 13:12:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=14f1e752-a599-4049-aace-6e3047b1cb3e 00:13:57.121 13:12:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 14f1e752-a599-4049-aace-6e3047b1cb3e lvol 20 00:13:57.379 13:12:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=83b01d10-a0f5-469f-973e-2259393e3773 00:13:57.379 13:12:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:57.636 13:12:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 83b01d10-a0f5-469f-973e-2259393e3773 00:13:57.896 13:12:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:58.155 [2024-07-15 13:12:54.714436] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:58.155 13:12:54 nvmf_tcp.nvmf_lvol -- 
target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:58.412 13:12:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=88604 00:13:58.412 13:12:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:13:58.412 13:12:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:13:59.346 13:12:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 83b01d10-a0f5-469f-973e-2259393e3773 MY_SNAPSHOT 00:13:59.604 13:12:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=cf26fe7c-1664-4a9c-9008-a98c243eb39e 00:13:59.604 13:12:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 83b01d10-a0f5-469f-973e-2259393e3773 30 00:14:00.170 13:12:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone cf26fe7c-1664-4a9c-9008-a98c243eb39e MY_CLONE 00:14:00.429 13:12:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=771e4790-218d-4b11-a490-cb80c4762d9c 00:14:00.429 13:12:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 771e4790-218d-4b11-a490-cb80c4762d9c 00:14:00.994 13:12:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 88604 00:14:09.102 Initializing NVMe Controllers 00:14:09.102 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:14:09.102 Controller IO queue size 128, less than required. 00:14:09.102 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:09.102 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:14:09.102 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:14:09.102 Initialization complete. Launching workers. 
00:14:09.102 ======================================================== 00:14:09.102 Latency(us) 00:14:09.102 Device Information : IOPS MiB/s Average min max 00:14:09.102 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10256.20 40.06 12484.42 2409.60 69039.25 00:14:09.102 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10305.40 40.26 12423.77 2529.01 65017.78 00:14:09.102 ======================================================== 00:14:09.102 Total : 20561.60 80.32 12454.02 2409.60 69039.25 00:14:09.102 00:14:09.102 13:13:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:09.102 13:13:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 83b01d10-a0f5-469f-973e-2259393e3773 00:14:09.102 13:13:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 14f1e752-a599-4049-aace-6e3047b1cb3e 00:14:09.360 13:13:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:14:09.360 13:13:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:14:09.360 13:13:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:14:09.360 13:13:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:09.360 13:13:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:14:09.618 13:13:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:09.618 13:13:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:14:09.618 13:13:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:09.618 13:13:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:09.618 rmmod nvme_tcp 00:14:09.618 rmmod nvme_fabrics 00:14:09.618 rmmod nvme_keyring 00:14:09.618 13:13:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:09.618 13:13:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:14:09.618 13:13:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:14:09.618 13:13:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 88456 ']' 00:14:09.618 13:13:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 88456 00:14:09.618 13:13:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@946 -- # '[' -z 88456 ']' 00:14:09.618 13:13:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@950 -- # kill -0 88456 00:14:09.618 13:13:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # uname 00:14:09.618 13:13:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:09.618 13:13:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 88456 00:14:09.618 killing process with pid 88456 00:14:09.618 13:13:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:09.618 13:13:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:09.618 13:13:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@964 -- # echo 'killing process with pid 88456' 00:14:09.618 13:13:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@965 -- # kill 88456 00:14:09.618 13:13:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@970 -- # wait 88456 00:14:09.876 13:13:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:09.876 13:13:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 
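
Taken together, the nvmf_lvol test above walks a full logical-volume lifecycle over NVMe/TCP: two 64 MB malloc bdevs are striped into a raid0, an lvstore and a 20 MB lvol are created on it, the lvol is exported through nqn.2016-06.io.spdk:cnode0, and while spdk_nvme_perf writes to it the lvol is snapshotted, resized to 30 MB, cloned and the clone inflated, after which everything is torn down. Reduced to the RPC calls as they appear in the trace (each UUID is captured from the corresponding call's output, as the script does):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_malloc_create 64 512                        # -> Malloc0
$rpc bdev_malloc_create 64 512                        # -> Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)        # prints the lvstore UUID
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)       # prints the lvol UUID
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
perf_pid=$!
# while the workload runs, reshape the lvol underneath it:
snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)   # prints the snapshot UUID
$rpc bdev_lvol_resize "$lvol" 30
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)        # prints the clone UUID
$rpc bdev_lvol_inflate "$clone"
wait "$perf_pid"
# teardown
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
$rpc bdev_lvol_delete "$lvol"
$rpc bdev_lvol_delete_lvstore -u "$lvs"
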
00:14:09.876 13:13:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:09.876 13:13:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:09.876 13:13:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:09.876 13:13:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:09.876 13:13:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:09.876 13:13:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:09.876 13:13:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:09.876 00:14:09.876 real 0m15.861s 00:14:09.876 user 1m6.284s 00:14:09.876 sys 0m3.999s 00:14:09.876 13:13:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:09.876 13:13:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:09.876 ************************************ 00:14:09.876 END TEST nvmf_lvol 00:14:09.876 ************************************ 00:14:09.876 13:13:06 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:09.876 13:13:06 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:09.876 13:13:06 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:09.876 13:13:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:09.876 ************************************ 00:14:09.876 START TEST nvmf_lvs_grow 00:14:09.876 ************************************ 00:14:09.876 13:13:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:09.876 * Looking for test storage... 
00:14:09.876 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:09.876 13:13:06 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:09.876 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:14:09.876 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:09.876 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:09.876 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:09.876 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:09.876 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:09.876 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:09.876 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:09.876 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:09.876 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:10.134 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:10.134 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:14:10.134 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=c8b8b44b-387e-43b9-a950-dc0d98528a02 00:14:10.134 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:10.134 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:10.134 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:10.134 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:10.134 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:10.134 13:13:06 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:10.134 13:13:06 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:10.134 13:13:06 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:10.134 13:13:06 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.134 13:13:06 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.134 13:13:06 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.134 13:13:06 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:14:10.134 13:13:06 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.134 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:14:10.134 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:10.134 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:10.134 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:10.134 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:10.134 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:10.134 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:10.134 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:10.134 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:10.134 13:13:06 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:10.134 13:13:06 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:10.134 13:13:06 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:14:10.134 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:10.134 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:10.134 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:10.134 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:10.134 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:10.134 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:14:10.135 13:13:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:10.135 13:13:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:10.135 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:10.135 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:10.135 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:10.135 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:10.135 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:10.135 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:10.135 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:10.135 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:10.135 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:10.135 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:10.135 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:10.135 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:10.135 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:10.135 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:10.135 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:10.135 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:10.135 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:10.135 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:10.135 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:10.135 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:10.135 Cannot find device "nvmf_tgt_br" 00:14:10.135 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # true 00:14:10.135 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:10.135 Cannot find device "nvmf_tgt_br2" 00:14:10.135 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # true 00:14:10.135 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:10.135 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:10.135 Cannot find device "nvmf_tgt_br" 00:14:10.135 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # true 00:14:10.135 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:10.135 Cannot find device "nvmf_tgt_br2" 00:14:10.135 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # true 00:14:10.135 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:10.135 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:10.135 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:10.135 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:14:10.135 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:14:10.135 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:10.135 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:10.135 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:14:10.135 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:10.135 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:10.135 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:10.135 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:10.135 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:10.135 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:10.135 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:10.135 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:10.135 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:10.135 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:10.135 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:10.135 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:10.135 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:10.393 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:10.393 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:10.393 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:10.393 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:10.393 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:10.393 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:10.393 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:10.393 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:10.393 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:10.393 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:10.393 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:10.393 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:10.393 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.114 ms 00:14:10.393 00:14:10.393 --- 10.0.0.2 ping statistics --- 00:14:10.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:10.393 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:14:10.393 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:10.393 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:10.393 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:14:10.393 00:14:10.393 --- 10.0.0.3 ping statistics --- 00:14:10.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:10.393 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:14:10.393 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:10.393 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:10.393 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:14:10.393 00:14:10.393 --- 10.0.0.1 ping statistics --- 00:14:10.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:10.393 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:14:10.393 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:10.393 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@433 -- # return 0 00:14:10.393 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:10.393 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:10.393 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:10.393 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:10.393 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:10.393 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:10.393 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:10.393 13:13:06 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:14:10.393 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:10.393 13:13:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:10.393 13:13:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:10.393 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:10.393 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=88967 00:14:10.393 13:13:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 88967 00:14:10.393 13:13:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # '[' -z 88967 ']' 00:14:10.393 13:13:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:10.393 13:13:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:10.393 13:13:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:10.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
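
The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." line above is printed by the suite's waitforlisten helper, which blocks until the freshly launched nvmf_tgt (pid 88967 in this run) answers on its RPC socket before any configuration RPCs are issued. A rough functional stand-in for that helper, outside the suite, is simply to poll the RPC server; this is an approximation, not the helper's actual implementation:

# rough stand-in for waitforlisten: poll until the target's RPC server answers
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
tgt_pid=$!
until $rpc -t 1 -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$tgt_pid" || { echo "target died during startup" >&2; exit 1; }
    sleep 0.5
done
# the target is now ready for configuration, e.g.:
#   $rpc nvmf_create_transport -t tcp -o -u 8192
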
00:14:10.393 13:13:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:10.393 13:13:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:10.393 [2024-07-15 13:13:07.052470] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:14:10.393 [2024-07-15 13:13:07.052567] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:10.650 [2024-07-15 13:13:07.191531] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:10.650 [2024-07-15 13:13:07.285129] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:10.650 [2024-07-15 13:13:07.285184] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:10.650 [2024-07-15 13:13:07.285196] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:10.650 [2024-07-15 13:13:07.285218] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:10.650 [2024-07-15 13:13:07.285228] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:10.650 [2024-07-15 13:13:07.285256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:11.582 13:13:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:11.582 13:13:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # return 0 00:14:11.582 13:13:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:11.582 13:13:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:11.582 13:13:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:11.582 13:13:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:11.582 13:13:08 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:11.840 [2024-07-15 13:13:08.355051] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:11.840 13:13:08 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:14:11.840 13:13:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:14:11.840 13:13:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:11.840 13:13:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:11.840 ************************************ 00:14:11.840 START TEST lvs_grow_clean 00:14:11.840 ************************************ 00:14:11.840 13:13:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1121 -- # lvs_grow 00:14:11.840 13:13:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:11.840 13:13:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:11.840 13:13:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:11.840 13:13:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:11.840 13:13:08 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:11.840 13:13:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:11.840 13:13:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:11.840 13:13:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:11.840 13:13:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:12.098 13:13:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:12.098 13:13:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:12.357 13:13:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=ca3d94bc-35ed-4aa5-9e1b-97b897b40d35 00:14:12.357 13:13:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ca3d94bc-35ed-4aa5-9e1b-97b897b40d35 00:14:12.357 13:13:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:12.614 13:13:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:12.614 13:13:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:12.614 13:13:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ca3d94bc-35ed-4aa5-9e1b-97b897b40d35 lvol 150 00:14:12.870 13:13:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=f2392d5c-c818-4619-821a-bb4f4abc344b 00:14:12.870 13:13:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:12.870 13:13:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:13.127 [2024-07-15 13:13:09.838073] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:13.127 [2024-07-15 13:13:09.838165] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:13.127 true 00:14:13.127 13:13:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ca3d94bc-35ed-4aa5-9e1b-97b897b40d35 00:14:13.127 13:13:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:13.692 13:13:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:13.693 13:13:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:13.693 13:13:10 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f2392d5c-c818-4619-821a-bb4f4abc344b 00:14:13.969 13:13:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:14.247 [2024-07-15 13:13:10.934691] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:14.247 13:13:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:14.505 13:13:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=89135 00:14:14.505 13:13:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:14.505 13:13:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:14.505 13:13:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 89135 /var/tmp/bdevperf.sock 00:14:14.505 13:13:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@827 -- # '[' -z 89135 ']' 00:14:14.505 13:13:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:14.505 13:13:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:14.505 13:13:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:14.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:14.505 13:13:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:14.505 13:13:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:14:14.764 [2024-07-15 13:13:11.254409] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
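The lvs_grow_clean scaffold traced above (nvmf_lvs_grow.sh lines @23-@38) boils down to a short RPC sequence: back a 200 MiB file with an AIO bdev, build a logical-volume store on it, sanity-check the cluster count, carve out a 150 MiB lvol, then grow the backing file and rescan. A condensed sketch, with the rpc.py path shortened into a variable and the UUIDs captured the way the harness does; the numbers match the log (4 MiB clusters, so a 200 MiB file yields 49 usable data clusters):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  aio_file=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev

  truncate -s 200M "$aio_file"
  "$rpc" bdev_aio_create "$aio_file" aio_bdev 4096                # 4 KiB logical blocks
  lvs=$("$rpc" bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  "$rpc" bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # expect 49
  lvol=$("$rpc" bdev_lvol_create -u "$lvs" lvol 150)              # 150 MiB logical volume

  # Grow the backing file; bdev_aio_rescan tells SPDK the block count changed
  # (51200 -> 102400 blocks). The lvstore itself is only grown later, while
  # bdevperf I/O is in flight, via bdev_lvol_grow_lvstore.
  truncate -s 400M "$aio_file"
  "$rpc" bdev_aio_rescan aio_bdev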
00:14:14.764 [2024-07-15 13:13:11.254515] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89135 ] 00:14:14.764 [2024-07-15 13:13:11.395226] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:14.764 [2024-07-15 13:13:11.497144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:15.696 13:13:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:15.696 13:13:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # return 0 00:14:15.696 13:13:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:15.954 Nvme0n1 00:14:15.954 13:13:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:16.211 [ 00:14:16.211 { 00:14:16.211 "aliases": [ 00:14:16.211 "f2392d5c-c818-4619-821a-bb4f4abc344b" 00:14:16.211 ], 00:14:16.211 "assigned_rate_limits": { 00:14:16.211 "r_mbytes_per_sec": 0, 00:14:16.211 "rw_ios_per_sec": 0, 00:14:16.211 "rw_mbytes_per_sec": 0, 00:14:16.211 "w_mbytes_per_sec": 0 00:14:16.211 }, 00:14:16.211 "block_size": 4096, 00:14:16.211 "claimed": false, 00:14:16.211 "driver_specific": { 00:14:16.211 "mp_policy": "active_passive", 00:14:16.211 "nvme": [ 00:14:16.211 { 00:14:16.211 "ctrlr_data": { 00:14:16.211 "ana_reporting": false, 00:14:16.211 "cntlid": 1, 00:14:16.211 "firmware_revision": "24.05.1", 00:14:16.211 "model_number": "SPDK bdev Controller", 00:14:16.211 "multi_ctrlr": true, 00:14:16.212 "oacs": { 00:14:16.212 "firmware": 0, 00:14:16.212 "format": 0, 00:14:16.212 "ns_manage": 0, 00:14:16.212 "security": 0 00:14:16.212 }, 00:14:16.212 "serial_number": "SPDK0", 00:14:16.212 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:16.212 "vendor_id": "0x8086" 00:14:16.212 }, 00:14:16.212 "ns_data": { 00:14:16.212 "can_share": true, 00:14:16.212 "id": 1 00:14:16.212 }, 00:14:16.212 "trid": { 00:14:16.212 "adrfam": "IPv4", 00:14:16.212 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:16.212 "traddr": "10.0.0.2", 00:14:16.212 "trsvcid": "4420", 00:14:16.212 "trtype": "TCP" 00:14:16.212 }, 00:14:16.212 "vs": { 00:14:16.212 "nvme_version": "1.3" 00:14:16.212 } 00:14:16.212 } 00:14:16.212 ] 00:14:16.212 }, 00:14:16.212 "memory_domains": [ 00:14:16.212 { 00:14:16.212 "dma_device_id": "system", 00:14:16.212 "dma_device_type": 1 00:14:16.212 } 00:14:16.212 ], 00:14:16.212 "name": "Nvme0n1", 00:14:16.212 "num_blocks": 38912, 00:14:16.212 "product_name": "NVMe disk", 00:14:16.212 "supported_io_types": { 00:14:16.212 "abort": true, 00:14:16.212 "compare": true, 00:14:16.212 "compare_and_write": true, 00:14:16.212 "flush": true, 00:14:16.212 "nvme_admin": true, 00:14:16.212 "nvme_io": true, 00:14:16.212 "read": true, 00:14:16.212 "reset": true, 00:14:16.212 "unmap": true, 00:14:16.212 "write": true, 00:14:16.212 "write_zeroes": true 00:14:16.212 }, 00:14:16.212 "uuid": "f2392d5c-c818-4619-821a-bb4f4abc344b", 00:14:16.212 "zoned": false 00:14:16.212 } 00:14:16.212 ] 00:14:16.212 13:13:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=89177 00:14:16.212 13:13:12 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:16.212 13:13:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:16.212 Running I/O for 10 seconds... 00:14:17.589 Latency(us) 00:14:17.589 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:17.589 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:17.589 Nvme0n1 : 1.00 7092.00 27.70 0.00 0.00 0.00 0.00 0.00 00:14:17.589 =================================================================================================================== 00:14:17.589 Total : 7092.00 27.70 0.00 0.00 0.00 0.00 0.00 00:14:17.589 00:14:18.154 13:13:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u ca3d94bc-35ed-4aa5-9e1b-97b897b40d35 00:14:18.412 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:18.412 Nvme0n1 : 2.00 6751.50 26.37 0.00 0.00 0.00 0.00 0.00 00:14:18.412 =================================================================================================================== 00:14:18.412 Total : 6751.50 26.37 0.00 0.00 0.00 0.00 0.00 00:14:18.412 00:14:18.670 true 00:14:18.670 13:13:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ca3d94bc-35ed-4aa5-9e1b-97b897b40d35 00:14:18.670 13:13:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:18.927 13:13:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:18.927 13:13:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:18.927 13:13:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 89177 00:14:19.491 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:19.491 Nvme0n1 : 3.00 6923.67 27.05 0.00 0.00 0.00 0.00 0.00 00:14:19.491 =================================================================================================================== 00:14:19.491 Total : 6923.67 27.05 0.00 0.00 0.00 0.00 0.00 00:14:19.491 00:14:20.475 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:20.475 Nvme0n1 : 4.00 7030.25 27.46 0.00 0.00 0.00 0.00 0.00 00:14:20.475 =================================================================================================================== 00:14:20.475 Total : 7030.25 27.46 0.00 0.00 0.00 0.00 0.00 00:14:20.475 00:14:21.422 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:21.422 Nvme0n1 : 5.00 7104.20 27.75 0.00 0.00 0.00 0.00 0.00 00:14:21.422 =================================================================================================================== 00:14:21.422 Total : 7104.20 27.75 0.00 0.00 0.00 0.00 0.00 00:14:21.422 00:14:22.354 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:22.354 Nvme0n1 : 6.00 7118.33 27.81 0.00 0.00 0.00 0.00 0.00 00:14:22.354 =================================================================================================================== 00:14:22.354 Total : 7118.33 27.81 0.00 0.00 0.00 0.00 0.00 00:14:22.354 00:14:23.286 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:14:23.286 Nvme0n1 : 7.00 7075.86 27.64 0.00 0.00 0.00 0.00 0.00 00:14:23.286 =================================================================================================================== 00:14:23.286 Total : 7075.86 27.64 0.00 0.00 0.00 0.00 0.00 00:14:23.286 00:14:24.218 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:24.218 Nvme0n1 : 8.00 7217.12 28.19 0.00 0.00 0.00 0.00 0.00 00:14:24.218 =================================================================================================================== 00:14:24.218 Total : 7217.12 28.19 0.00 0.00 0.00 0.00 0.00 00:14:24.218 00:14:25.591 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:25.591 Nvme0n1 : 9.00 7326.22 28.62 0.00 0.00 0.00 0.00 0.00 00:14:25.591 =================================================================================================================== 00:14:25.591 Total : 7326.22 28.62 0.00 0.00 0.00 0.00 0.00 00:14:25.591 00:14:26.225 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:26.225 Nvme0n1 : 10.00 7412.10 28.95 0.00 0.00 0.00 0.00 0.00 00:14:26.225 =================================================================================================================== 00:14:26.225 Total : 7412.10 28.95 0.00 0.00 0.00 0.00 0.00 00:14:26.225 00:14:26.225 00:14:26.225 Latency(us) 00:14:26.225 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:26.225 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:26.225 Nvme0n1 : 10.01 7420.12 28.98 0.00 0.00 17244.58 7745.16 52190.49 00:14:26.225 =================================================================================================================== 00:14:26.225 Total : 7420.12 28.98 0.00 0.00 17244.58 7745.16 52190.49 00:14:26.225 0 00:14:26.225 13:13:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 89135 00:14:26.225 13:13:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@946 -- # '[' -z 89135 ']' 00:14:26.225 13:13:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # kill -0 89135 00:14:26.226 13:13:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # uname 00:14:26.226 13:13:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:26.226 13:13:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 89135 00:14:26.484 13:13:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:26.484 13:13:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:26.484 killing process with pid 89135 00:14:26.484 13:13:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 89135' 00:14:26.484 13:13:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@965 -- # kill 89135 00:14:26.484 Received shutdown signal, test time was about 10.000000 seconds 00:14:26.484 00:14:26.484 Latency(us) 00:14:26.484 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:26.484 =================================================================================================================== 00:14:26.484 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:26.484 13:13:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@970 -- # wait 89135 00:14:26.484 13:13:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:26.742 13:13:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:27.000 13:13:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:14:27.000 13:13:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ca3d94bc-35ed-4aa5-9e1b-97b897b40d35 00:14:27.564 13:13:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:14:27.564 13:13:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:14:27.564 13:13:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:27.564 [2024-07-15 13:13:24.278798] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:27.822 13:13:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ca3d94bc-35ed-4aa5-9e1b-97b897b40d35 00:14:27.822 13:13:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:14:27.822 13:13:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ca3d94bc-35ed-4aa5-9e1b-97b897b40d35 00:14:27.822 13:13:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:27.822 13:13:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:27.822 13:13:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:27.822 13:13:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:27.822 13:13:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:27.822 13:13:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:27.822 13:13:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:27.822 13:13:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:27.822 13:13:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ca3d94bc-35ed-4aa5-9e1b-97b897b40d35 00:14:28.080 2024/07/15 13:13:24 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:ca3d94bc-35ed-4aa5-9e1b-97b897b40d35], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:14:28.080 request: 00:14:28.080 { 00:14:28.080 "method": "bdev_lvol_get_lvstores", 00:14:28.080 "params": { 00:14:28.080 "uuid": 
"ca3d94bc-35ed-4aa5-9e1b-97b897b40d35" 00:14:28.080 } 00:14:28.080 } 00:14:28.080 Got JSON-RPC error response 00:14:28.080 GoRPCClient: error on JSON-RPC call 00:14:28.080 13:13:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:14:28.080 13:13:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:28.080 13:13:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:28.080 13:13:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:28.080 13:13:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:28.338 aio_bdev 00:14:28.338 13:13:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev f2392d5c-c818-4619-821a-bb4f4abc344b 00:14:28.338 13:13:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@895 -- # local bdev_name=f2392d5c-c818-4619-821a-bb4f4abc344b 00:14:28.338 13:13:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:28.338 13:13:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local i 00:14:28.338 13:13:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:28.338 13:13:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:28.338 13:13:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:28.595 13:13:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f2392d5c-c818-4619-821a-bb4f4abc344b -t 2000 00:14:28.853 [ 00:14:28.853 { 00:14:28.853 "aliases": [ 00:14:28.853 "lvs/lvol" 00:14:28.853 ], 00:14:28.853 "assigned_rate_limits": { 00:14:28.853 "r_mbytes_per_sec": 0, 00:14:28.853 "rw_ios_per_sec": 0, 00:14:28.853 "rw_mbytes_per_sec": 0, 00:14:28.853 "w_mbytes_per_sec": 0 00:14:28.853 }, 00:14:28.853 "block_size": 4096, 00:14:28.853 "claimed": false, 00:14:28.853 "driver_specific": { 00:14:28.853 "lvol": { 00:14:28.853 "base_bdev": "aio_bdev", 00:14:28.853 "clone": false, 00:14:28.853 "esnap_clone": false, 00:14:28.853 "lvol_store_uuid": "ca3d94bc-35ed-4aa5-9e1b-97b897b40d35", 00:14:28.853 "num_allocated_clusters": 38, 00:14:28.853 "snapshot": false, 00:14:28.853 "thin_provision": false 00:14:28.853 } 00:14:28.853 }, 00:14:28.853 "name": "f2392d5c-c818-4619-821a-bb4f4abc344b", 00:14:28.853 "num_blocks": 38912, 00:14:28.853 "product_name": "Logical Volume", 00:14:28.853 "supported_io_types": { 00:14:28.853 "abort": false, 00:14:28.853 "compare": false, 00:14:28.853 "compare_and_write": false, 00:14:28.853 "flush": false, 00:14:28.853 "nvme_admin": false, 00:14:28.853 "nvme_io": false, 00:14:28.853 "read": true, 00:14:28.853 "reset": true, 00:14:28.853 "unmap": true, 00:14:28.853 "write": true, 00:14:28.853 "write_zeroes": true 00:14:28.853 }, 00:14:28.853 "uuid": "f2392d5c-c818-4619-821a-bb4f4abc344b", 00:14:28.853 "zoned": false 00:14:28.853 } 00:14:28.853 ] 00:14:28.853 13:13:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # return 0 00:14:28.853 13:13:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ca3d94bc-35ed-4aa5-9e1b-97b897b40d35 00:14:28.853 13:13:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:14:29.111 13:13:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:14:29.111 13:13:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ca3d94bc-35ed-4aa5-9e1b-97b897b40d35 00:14:29.111 13:13:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:14:29.369 13:13:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:14:29.369 13:13:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete f2392d5c-c818-4619-821a-bb4f4abc344b 00:14:29.627 13:13:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ca3d94bc-35ed-4aa5-9e1b-97b897b40d35 00:14:29.884 13:13:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:30.140 13:13:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:30.730 ************************************ 00:14:30.730 END TEST lvs_grow_clean 00:14:30.730 ************************************ 00:14:30.730 00:14:30.730 real 0m18.776s 00:14:30.730 user 0m18.105s 00:14:30.730 sys 0m2.289s 00:14:30.730 13:13:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:30.730 13:13:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:14:30.730 13:13:27 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:14:30.730 13:13:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:30.730 13:13:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:30.730 13:13:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:30.730 ************************************ 00:14:30.730 START TEST lvs_grow_dirty 00:14:30.730 ************************************ 00:14:30.730 13:13:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1121 -- # lvs_grow dirty 00:14:30.730 13:13:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:30.730 13:13:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:30.730 13:13:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:30.730 13:13:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:30.730 13:13:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:30.730 13:13:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:30.730 13:13:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:30.730 13:13:27 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:30.730 13:13:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:30.987 13:13:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:30.987 13:13:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:31.243 13:13:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=fd279723-1178-46d9-9864-a2f358e06c5b 00:14:31.243 13:13:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:31.243 13:13:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fd279723-1178-46d9-9864-a2f358e06c5b 00:14:31.501 13:13:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:31.501 13:13:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:31.501 13:13:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u fd279723-1178-46d9-9864-a2f358e06c5b lvol 150 00:14:31.759 13:13:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=1510acec-086a-464c-8c53-d6fa60f7546a 00:14:31.759 13:13:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:31.759 13:13:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:32.015 [2024-07-15 13:13:28.646099] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:32.015 [2024-07-15 13:13:28.646193] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:32.015 true 00:14:32.015 13:13:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:32.015 13:13:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fd279723-1178-46d9-9864-a2f358e06c5b 00:14:32.272 13:13:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:32.272 13:13:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:32.529 13:13:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1510acec-086a-464c-8c53-d6fa60f7546a 00:14:32.785 13:13:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
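Both the clean and the dirty variants export the freshly created lvol over NVMe/TCP the same way (nvmf_lvs_grow.sh @41-@44); the transport itself is created once, right after nvmfappstart. A minimal sketch of the export step, issued against the target's /var/tmp/spdk.sock with the $lvol UUID captured from bdev_lvol_create (same transport flags as in the log):

  "$rpc" nvmf_create_transport -t tcp -o -u 8192                  # once, at target start-up
  "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  "$rpc" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420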
00:14:33.042 [2024-07-15 13:13:29.658680] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:33.042 13:13:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:33.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:33.300 13:13:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=89581 00:14:33.300 13:13:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:33.300 13:13:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:33.300 13:13:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 89581 /var/tmp/bdevperf.sock 00:14:33.300 13:13:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 89581 ']' 00:14:33.300 13:13:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:33.300 13:13:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:33.300 13:13:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:33.300 13:13:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:33.300 13:13:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:33.300 [2024-07-15 13:13:29.969591] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
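On the initiator side the pattern is identical in both runs: a standalone bdevperf is started with -z (start suspended, wait for RPCs), an NVMe-oF controller is attached to it across the bridge, and while the 10-second randwrite job runs the lvstore is grown, after which total_data_clusters should have doubled from 49 to 99. A hedged sketch under the same path and variable assumptions as above; the harness additionally waits for /var/tmp/bdevperf.sock to appear before issuing RPCs:

  bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  bdevperf_py=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py

  "$bdevperf" -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  bdevperf_pid=$!
  "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  "$bdevperf_py" -s /var/tmp/bdevperf.sock perform_tests &        # 10 s of I/O against Nvme0n1
  run_test_pid=$!

  sleep 2                                                         # let the job ramp up
  "$rpc" bdev_lvol_grow_lvstore -u "$lvs"                         # grow while I/O is in flight
  "$rpc" bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # expect 99
  wait "$run_test_pid"                                            # I/O completes after ~10 s
  kill "$bdevperf_pid"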
00:14:33.300 [2024-07-15 13:13:29.969695] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89581 ] 00:14:33.558 [2024-07-15 13:13:30.108246] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:33.558 [2024-07-15 13:13:30.211267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:34.490 13:13:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:34.490 13:13:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:14:34.490 13:13:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:34.747 Nvme0n1 00:14:34.747 13:13:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:35.005 [ 00:14:35.005 { 00:14:35.005 "aliases": [ 00:14:35.005 "1510acec-086a-464c-8c53-d6fa60f7546a" 00:14:35.005 ], 00:14:35.005 "assigned_rate_limits": { 00:14:35.005 "r_mbytes_per_sec": 0, 00:14:35.005 "rw_ios_per_sec": 0, 00:14:35.005 "rw_mbytes_per_sec": 0, 00:14:35.005 "w_mbytes_per_sec": 0 00:14:35.005 }, 00:14:35.005 "block_size": 4096, 00:14:35.005 "claimed": false, 00:14:35.005 "driver_specific": { 00:14:35.005 "mp_policy": "active_passive", 00:14:35.005 "nvme": [ 00:14:35.005 { 00:14:35.005 "ctrlr_data": { 00:14:35.005 "ana_reporting": false, 00:14:35.005 "cntlid": 1, 00:14:35.005 "firmware_revision": "24.05.1", 00:14:35.005 "model_number": "SPDK bdev Controller", 00:14:35.005 "multi_ctrlr": true, 00:14:35.005 "oacs": { 00:14:35.005 "firmware": 0, 00:14:35.005 "format": 0, 00:14:35.005 "ns_manage": 0, 00:14:35.005 "security": 0 00:14:35.005 }, 00:14:35.005 "serial_number": "SPDK0", 00:14:35.005 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:35.005 "vendor_id": "0x8086" 00:14:35.005 }, 00:14:35.005 "ns_data": { 00:14:35.005 "can_share": true, 00:14:35.005 "id": 1 00:14:35.005 }, 00:14:35.005 "trid": { 00:14:35.005 "adrfam": "IPv4", 00:14:35.005 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:35.005 "traddr": "10.0.0.2", 00:14:35.005 "trsvcid": "4420", 00:14:35.005 "trtype": "TCP" 00:14:35.005 }, 00:14:35.005 "vs": { 00:14:35.005 "nvme_version": "1.3" 00:14:35.005 } 00:14:35.005 } 00:14:35.005 ] 00:14:35.005 }, 00:14:35.005 "memory_domains": [ 00:14:35.005 { 00:14:35.005 "dma_device_id": "system", 00:14:35.005 "dma_device_type": 1 00:14:35.005 } 00:14:35.005 ], 00:14:35.005 "name": "Nvme0n1", 00:14:35.005 "num_blocks": 38912, 00:14:35.005 "product_name": "NVMe disk", 00:14:35.005 "supported_io_types": { 00:14:35.005 "abort": true, 00:14:35.005 "compare": true, 00:14:35.005 "compare_and_write": true, 00:14:35.005 "flush": true, 00:14:35.005 "nvme_admin": true, 00:14:35.005 "nvme_io": true, 00:14:35.005 "read": true, 00:14:35.005 "reset": true, 00:14:35.005 "unmap": true, 00:14:35.005 "write": true, 00:14:35.005 "write_zeroes": true 00:14:35.005 }, 00:14:35.005 "uuid": "1510acec-086a-464c-8c53-d6fa60f7546a", 00:14:35.005 "zoned": false 00:14:35.005 } 00:14:35.005 ] 00:14:35.005 13:13:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=89629 00:14:35.005 13:13:31 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:35.005 13:13:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:35.005 Running I/O for 10 seconds... 00:14:36.379 Latency(us) 00:14:36.379 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:36.379 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:36.379 Nvme0n1 : 1.00 8540.00 33.36 0.00 0.00 0.00 0.00 0.00 00:14:36.379 =================================================================================================================== 00:14:36.379 Total : 8540.00 33.36 0.00 0.00 0.00 0.00 0.00 00:14:36.379 00:14:36.946 13:13:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u fd279723-1178-46d9-9864-a2f358e06c5b 00:14:37.203 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:37.203 Nvme0n1 : 2.00 8489.00 33.16 0.00 0.00 0.00 0.00 0.00 00:14:37.203 =================================================================================================================== 00:14:37.203 Total : 8489.00 33.16 0.00 0.00 0.00 0.00 0.00 00:14:37.203 00:14:37.203 true 00:14:37.203 13:13:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fd279723-1178-46d9-9864-a2f358e06c5b 00:14:37.203 13:13:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:37.769 13:13:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:37.769 13:13:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:37.769 13:13:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 89629 00:14:38.026 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:38.026 Nvme0n1 : 3.00 8465.67 33.07 0.00 0.00 0.00 0.00 0.00 00:14:38.026 =================================================================================================================== 00:14:38.026 Total : 8465.67 33.07 0.00 0.00 0.00 0.00 0.00 00:14:38.026 00:14:38.959 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:38.959 Nvme0n1 : 4.00 8418.75 32.89 0.00 0.00 0.00 0.00 0.00 00:14:38.959 =================================================================================================================== 00:14:38.959 Total : 8418.75 32.89 0.00 0.00 0.00 0.00 0.00 00:14:38.959 00:14:40.340 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:40.340 Nvme0n1 : 5.00 8382.00 32.74 0.00 0.00 0.00 0.00 0.00 00:14:40.340 =================================================================================================================== 00:14:40.340 Total : 8382.00 32.74 0.00 0.00 0.00 0.00 0.00 00:14:40.340 00:14:41.274 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:41.274 Nvme0n1 : 6.00 7716.33 30.14 0.00 0.00 0.00 0.00 0.00 00:14:41.274 =================================================================================================================== 00:14:41.274 Total : 7716.33 30.14 0.00 0.00 0.00 0.00 0.00 00:14:41.274 00:14:42.209 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:14:42.209 Nvme0n1 : 7.00 7376.57 28.81 0.00 0.00 0.00 0.00 0.00 00:14:42.209 =================================================================================================================== 00:14:42.209 Total : 7376.57 28.81 0.00 0.00 0.00 0.00 0.00 00:14:42.209 00:14:43.142 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:43.142 Nvme0n1 : 8.00 7413.50 28.96 0.00 0.00 0.00 0.00 0.00 00:14:43.142 =================================================================================================================== 00:14:43.142 Total : 7413.50 28.96 0.00 0.00 0.00 0.00 0.00 00:14:43.142 00:14:44.071 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:44.071 Nvme0n1 : 9.00 7470.78 29.18 0.00 0.00 0.00 0.00 0.00 00:14:44.071 =================================================================================================================== 00:14:44.071 Total : 7470.78 29.18 0.00 0.00 0.00 0.00 0.00 00:14:44.071 00:14:45.007 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:45.007 Nvme0n1 : 10.00 7526.10 29.40 0.00 0.00 0.00 0.00 0.00 00:14:45.007 =================================================================================================================== 00:14:45.007 Total : 7526.10 29.40 0.00 0.00 0.00 0.00 0.00 00:14:45.007 00:14:45.007 00:14:45.007 Latency(us) 00:14:45.007 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:45.007 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:45.007 Nvme0n1 : 10.01 7530.73 29.42 0.00 0.00 16991.85 5868.45 678714.65 00:14:45.007 =================================================================================================================== 00:14:45.007 Total : 7530.73 29.42 0.00 0.00 16991.85 5868.45 678714.65 00:14:45.007 0 00:14:45.007 13:13:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 89581 00:14:45.007 13:13:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@946 -- # '[' -z 89581 ']' 00:14:45.007 13:13:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # kill -0 89581 00:14:45.007 13:13:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # uname 00:14:45.007 13:13:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:45.007 13:13:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 89581 00:14:45.007 13:13:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:45.007 13:13:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:45.007 killing process with pid 89581 00:14:45.007 13:13:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # echo 'killing process with pid 89581' 00:14:45.007 13:13:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@965 -- # kill 89581 00:14:45.007 Received shutdown signal, test time was about 10.000000 seconds 00:14:45.007 00:14:45.007 Latency(us) 00:14:45.007 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:45.007 =================================================================================================================== 00:14:45.007 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:45.007 13:13:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@970 -- # wait 89581 00:14:45.264 13:13:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:45.829 13:13:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:46.086 13:13:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fd279723-1178-46d9-9864-a2f358e06c5b 00:14:46.087 13:13:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:14:46.343 13:13:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:14:46.343 13:13:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:14:46.343 13:13:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 88967 00:14:46.343 13:13:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 88967 00:14:46.343 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 88967 Killed "${NVMF_APP[@]}" "$@" 00:14:46.343 13:13:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:14:46.343 13:13:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:14:46.343 13:13:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:46.343 13:13:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:46.343 13:13:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:46.343 13:13:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=89793 00:14:46.343 13:13:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:46.343 13:13:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 89793 00:14:46.343 13:13:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 89793 ']' 00:14:46.343 13:13:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:46.343 13:13:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:46.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:46.343 13:13:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:46.343 13:13:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:46.343 13:13:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:46.343 [2024-07-15 13:13:42.973349] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
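The dirty variant differs from the clean one only in how the target goes away: instead of an orderly teardown, the nvmf_tgt that owns the lvstore is killed with SIGKILL (the "kill -9 88967" above), so the blobstore metadata on the AIO file is left dirty. When the fresh target re-creates the AIO bdev, the blobstore layer runs recovery ("Performing recovery on blobstore", "Recover: blob 0x0"/"0x1" in the trace below) and the lvol reappears with its cluster counts intact. A sketch of that restart path, with $nvmfpid being the target pid recorded by nvmfappstart and the restart shown expanded rather than through the helper:

  kill -9 "$nvmfpid"                  # simulate a crash; lvstore metadata stays dirty on disk
  wait "$nvmfpid" || true

  # Restart the target inside the same namespace and let it re-examine the AIO file.
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!

  "$rpc" bdev_aio_create "$aio_file" aio_bdev 4096   # triggers blobstore recovery + lvol examine
  "$rpc" bdev_wait_for_examine
  "$rpc" bdev_get_bdevs -b "$lvol" -t 2000           # lvol is back: lvs/lvol, 38912 blocks
  "$rpc" bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'         # expect 61
  "$rpc" bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # expect 99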
00:14:46.343 [2024-07-15 13:13:42.973431] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:46.605 [2024-07-15 13:13:43.111233] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:46.605 [2024-07-15 13:13:43.207225] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:46.605 [2024-07-15 13:13:43.207274] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:46.605 [2024-07-15 13:13:43.207286] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:46.605 [2024-07-15 13:13:43.207294] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:46.605 [2024-07-15 13:13:43.207302] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:46.605 [2024-07-15 13:13:43.207327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:46.605 13:13:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:46.605 13:13:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:14:46.605 13:13:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:46.605 13:13:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:46.605 13:13:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:46.862 13:13:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:46.862 13:13:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:47.120 [2024-07-15 13:13:43.651166] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:14:47.120 [2024-07-15 13:13:43.651453] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:14:47.120 [2024-07-15 13:13:43.651668] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:14:47.120 13:13:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:14:47.120 13:13:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 1510acec-086a-464c-8c53-d6fa60f7546a 00:14:47.120 13:13:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=1510acec-086a-464c-8c53-d6fa60f7546a 00:14:47.120 13:13:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:47.120 13:13:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:14:47.120 13:13:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:47.120 13:13:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:47.120 13:13:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:47.378 13:13:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1510acec-086a-464c-8c53-d6fa60f7546a -t 2000 00:14:47.635 [ 00:14:47.635 { 00:14:47.635 "aliases": [ 00:14:47.635 "lvs/lvol" 00:14:47.635 ], 00:14:47.635 "assigned_rate_limits": { 00:14:47.635 "r_mbytes_per_sec": 0, 00:14:47.636 "rw_ios_per_sec": 0, 00:14:47.636 "rw_mbytes_per_sec": 0, 00:14:47.636 "w_mbytes_per_sec": 0 00:14:47.636 }, 00:14:47.636 "block_size": 4096, 00:14:47.636 "claimed": false, 00:14:47.636 "driver_specific": { 00:14:47.636 "lvol": { 00:14:47.636 "base_bdev": "aio_bdev", 00:14:47.636 "clone": false, 00:14:47.636 "esnap_clone": false, 00:14:47.636 "lvol_store_uuid": "fd279723-1178-46d9-9864-a2f358e06c5b", 00:14:47.636 "num_allocated_clusters": 38, 00:14:47.636 "snapshot": false, 00:14:47.636 "thin_provision": false 00:14:47.636 } 00:14:47.636 }, 00:14:47.636 "name": "1510acec-086a-464c-8c53-d6fa60f7546a", 00:14:47.636 "num_blocks": 38912, 00:14:47.636 "product_name": "Logical Volume", 00:14:47.636 "supported_io_types": { 00:14:47.636 "abort": false, 00:14:47.636 "compare": false, 00:14:47.636 "compare_and_write": false, 00:14:47.636 "flush": false, 00:14:47.636 "nvme_admin": false, 00:14:47.636 "nvme_io": false, 00:14:47.636 "read": true, 00:14:47.636 "reset": true, 00:14:47.636 "unmap": true, 00:14:47.636 "write": true, 00:14:47.636 "write_zeroes": true 00:14:47.636 }, 00:14:47.636 "uuid": "1510acec-086a-464c-8c53-d6fa60f7546a", 00:14:47.636 "zoned": false 00:14:47.636 } 00:14:47.636 ] 00:14:47.636 13:13:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:14:47.636 13:13:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fd279723-1178-46d9-9864-a2f358e06c5b 00:14:47.636 13:13:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:14:47.894 13:13:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:14:47.894 13:13:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:14:47.894 13:13:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fd279723-1178-46d9-9864-a2f358e06c5b 00:14:48.152 13:13:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:14:48.152 13:13:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:48.718 [2024-07-15 13:13:45.152592] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:48.718 13:13:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fd279723-1178-46d9-9864-a2f358e06c5b 00:14:48.718 13:13:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:14:48.718 13:13:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fd279723-1178-46d9-9864-a2f358e06c5b 00:14:48.718 13:13:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:48.718 13:13:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:48.718 13:13:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:48.718 13:13:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:48.718 13:13:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:48.718 13:13:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:48.718 13:13:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:48.718 13:13:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:48.718 13:13:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fd279723-1178-46d9-9864-a2f358e06c5b 00:14:48.976 2024/07/15 13:13:45 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:fd279723-1178-46d9-9864-a2f358e06c5b], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:14:48.976 request: 00:14:48.976 { 00:14:48.976 "method": "bdev_lvol_get_lvstores", 00:14:48.976 "params": { 00:14:48.976 "uuid": "fd279723-1178-46d9-9864-a2f358e06c5b" 00:14:48.976 } 00:14:48.976 } 00:14:48.976 Got JSON-RPC error response 00:14:48.976 GoRPCClient: error on JSON-RPC call 00:14:48.976 13:13:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:14:48.976 13:13:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:48.976 13:13:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:48.976 13:13:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:48.976 13:13:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:49.233 aio_bdev 00:14:49.233 13:13:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 1510acec-086a-464c-8c53-d6fa60f7546a 00:14:49.233 13:13:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=1510acec-086a-464c-8c53-d6fa60f7546a 00:14:49.233 13:13:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:49.233 13:13:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:14:49.233 13:13:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:49.233 13:13:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:49.233 13:13:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:49.491 13:13:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1510acec-086a-464c-8c53-d6fa60f7546a -t 2000 00:14:49.749 [ 00:14:49.749 { 00:14:49.749 "aliases": [ 00:14:49.749 "lvs/lvol" 00:14:49.749 ], 00:14:49.749 
"assigned_rate_limits": { 00:14:49.749 "r_mbytes_per_sec": 0, 00:14:49.749 "rw_ios_per_sec": 0, 00:14:49.749 "rw_mbytes_per_sec": 0, 00:14:49.749 "w_mbytes_per_sec": 0 00:14:49.749 }, 00:14:49.749 "block_size": 4096, 00:14:49.749 "claimed": false, 00:14:49.749 "driver_specific": { 00:14:49.749 "lvol": { 00:14:49.749 "base_bdev": "aio_bdev", 00:14:49.749 "clone": false, 00:14:49.749 "esnap_clone": false, 00:14:49.749 "lvol_store_uuid": "fd279723-1178-46d9-9864-a2f358e06c5b", 00:14:49.749 "num_allocated_clusters": 38, 00:14:49.749 "snapshot": false, 00:14:49.749 "thin_provision": false 00:14:49.749 } 00:14:49.749 }, 00:14:49.749 "name": "1510acec-086a-464c-8c53-d6fa60f7546a", 00:14:49.749 "num_blocks": 38912, 00:14:49.749 "product_name": "Logical Volume", 00:14:49.749 "supported_io_types": { 00:14:49.749 "abort": false, 00:14:49.749 "compare": false, 00:14:49.749 "compare_and_write": false, 00:14:49.749 "flush": false, 00:14:49.749 "nvme_admin": false, 00:14:49.749 "nvme_io": false, 00:14:49.749 "read": true, 00:14:49.749 "reset": true, 00:14:49.749 "unmap": true, 00:14:49.749 "write": true, 00:14:49.749 "write_zeroes": true 00:14:49.749 }, 00:14:49.749 "uuid": "1510acec-086a-464c-8c53-d6fa60f7546a", 00:14:49.749 "zoned": false 00:14:49.749 } 00:14:49.749 ] 00:14:49.749 13:13:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:14:49.749 13:13:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fd279723-1178-46d9-9864-a2f358e06c5b 00:14:49.749 13:13:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:14:50.007 13:13:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:14:50.007 13:13:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:14:50.007 13:13:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fd279723-1178-46d9-9864-a2f358e06c5b 00:14:50.265 13:13:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:14:50.265 13:13:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 1510acec-086a-464c-8c53-d6fa60f7546a 00:14:50.522 13:13:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fd279723-1178-46d9-9864-a2f358e06c5b 00:14:50.780 13:13:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:51.346 13:13:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:51.604 00:14:51.604 real 0m20.984s 00:14:51.604 user 0m44.301s 00:14:51.604 sys 0m7.961s 00:14:51.604 13:13:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:51.604 13:13:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:51.604 ************************************ 00:14:51.604 END TEST lvs_grow_dirty 00:14:51.604 ************************************ 00:14:51.604 13:13:48 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 
00:14:51.604 13:13:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@804 -- # type=--id 00:14:51.604 13:13:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@805 -- # id=0 00:14:51.604 13:13:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:14:51.604 13:13:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:51.604 13:13:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:14:51.604 13:13:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:14:51.604 13:13:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # for n in $shm_files 00:14:51.604 13:13:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:51.604 nvmf_trace.0 00:14:51.604 13:13:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # return 0 00:14:51.604 13:13:48 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:14:51.604 13:13:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:51.604 13:13:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:14:51.862 13:13:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:51.862 13:13:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:14:51.862 13:13:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:51.862 13:13:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:51.862 rmmod nvme_tcp 00:14:51.862 rmmod nvme_fabrics 00:14:51.862 rmmod nvme_keyring 00:14:51.862 13:13:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:51.862 13:13:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:14:51.862 13:13:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:14:51.862 13:13:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 89793 ']' 00:14:51.862 13:13:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 89793 00:14:51.862 13:13:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@946 -- # '[' -z 89793 ']' 00:14:51.862 13:13:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # kill -0 89793 00:14:51.862 13:13:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # uname 00:14:51.862 13:13:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:51.862 13:13:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 89793 00:14:51.862 13:13:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:51.862 13:13:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:51.862 killing process with pid 89793 00:14:51.862 13:13:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # echo 'killing process with pid 89793' 00:14:51.862 13:13:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@965 -- # kill 89793 00:14:51.862 13:13:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # wait 89793 00:14:52.119 13:13:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:52.119 13:13:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:52.119 13:13:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:52.119 13:13:48 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:52.119 13:13:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:52.119 13:13:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:52.119 13:13:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:52.119 13:13:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:52.119 13:13:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:52.119 ************************************ 00:14:52.119 END TEST nvmf_lvs_grow 00:14:52.119 ************************************ 00:14:52.119 00:14:52.119 real 0m42.193s 00:14:52.119 user 1m8.768s 00:14:52.119 sys 0m10.960s 00:14:52.119 13:13:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:52.119 13:13:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:52.119 13:13:48 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:52.119 13:13:48 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:52.119 13:13:48 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:52.119 13:13:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:52.119 ************************************ 00:14:52.119 START TEST nvmf_bdev_io_wait 00:14:52.119 ************************************ 00:14:52.119 13:13:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:52.119 * Looking for test storage... 00:14:52.377 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:52.377 13:13:48 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:52.377 13:13:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:14:52.377 13:13:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:52.377 13:13:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:52.377 13:13:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:52.377 13:13:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:52.377 13:13:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:52.377 13:13:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:52.378 13:13:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:52.378 13:13:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:52.378 13:13:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:52.378 13:13:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:52.378 13:13:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:14:52.378 13:13:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=c8b8b44b-387e-43b9-a950-dc0d98528a02 00:14:52.378 13:13:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:52.378 13:13:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:14:52.378 13:13:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:52.378 13:13:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:52.378 13:13:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:52.378 13:13:48 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:52.378 13:13:48 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:52.378 13:13:48 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:52.378 13:13:48 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.378 13:13:48 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.378 13:13:48 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.378 13:13:48 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:14:52.378 13:13:48 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.378 13:13:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:14:52.378 13:13:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:52.378 13:13:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:52.378 13:13:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:14:52.378 13:13:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:52.378 13:13:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:52.378 13:13:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:52.378 13:13:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:52.378 13:13:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:52.378 13:13:48 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:52.378 13:13:48 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:52.378 13:13:48 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:14:52.378 13:13:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:52.378 13:13:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:52.378 13:13:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:52.378 13:13:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:52.378 13:13:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:52.378 13:13:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:52.378 13:13:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:52.378 13:13:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:52.378 13:13:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:52.378 13:13:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:52.378 13:13:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:52.378 13:13:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:52.378 13:13:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:52.378 13:13:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:52.378 13:13:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:52.378 13:13:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:52.378 13:13:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:52.378 13:13:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:52.378 13:13:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:52.378 13:13:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:52.378 13:13:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:52.378 13:13:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:52.378 13:13:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:52.378 13:13:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:52.378 13:13:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:52.378 13:13:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:52.378 
13:13:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:52.378 13:13:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:52.378 Cannot find device "nvmf_tgt_br" 00:14:52.378 13:13:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # true 00:14:52.378 13:13:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:52.378 Cannot find device "nvmf_tgt_br2" 00:14:52.378 13:13:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # true 00:14:52.378 13:13:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:52.378 13:13:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:52.378 Cannot find device "nvmf_tgt_br" 00:14:52.378 13:13:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # true 00:14:52.378 13:13:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:52.378 Cannot find device "nvmf_tgt_br2" 00:14:52.378 13:13:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # true 00:14:52.378 13:13:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:52.378 13:13:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:52.378 13:13:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:52.378 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:52.378 13:13:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:14:52.378 13:13:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:52.378 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:52.378 13:13:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:14:52.378 13:13:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:52.378 13:13:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:52.378 13:13:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:52.378 13:13:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:52.378 13:13:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:52.378 13:13:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:52.378 13:13:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:52.378 13:13:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:52.378 13:13:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:52.378 13:13:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:52.378 13:13:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:52.378 13:13:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:52.378 13:13:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 
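The nvmf_veth_init steps being traced here give the target its own network namespace and veth pairs; the in-namespace link bring-up, the bridge, the iptables rule, and the ping checks follow immediately below. A condensed sketch of the bring-up, run as root and using the same names and 10.0.0.x addressing as the trace (the second target pair, nvmf_tgt_if2/10.0.0.3, is set up the same way and omitted here), might be:

  # Namespace for the NVMe-oF target plus veth pairs for initiator and target.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  # Initiator side stays in the default namespace; target side lives in the netns.
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  # Bridge the host-side peers together, open TCP/4420, and verify reachability.
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2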
00:14:52.378 13:13:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:52.637 13:13:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:52.637 13:13:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:52.637 13:13:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:52.637 13:13:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:52.637 13:13:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:52.637 13:13:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:52.637 13:13:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:52.637 13:13:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:52.637 13:13:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:52.637 13:13:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:52.637 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:52.637 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:14:52.637 00:14:52.637 --- 10.0.0.2 ping statistics --- 00:14:52.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:52.637 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:14:52.637 13:13:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:52.637 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:52.637 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:14:52.637 00:14:52.637 --- 10.0.0.3 ping statistics --- 00:14:52.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:52.637 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:14:52.637 13:13:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:52.637 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:52.637 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:14:52.637 00:14:52.637 --- 10.0.0.1 ping statistics --- 00:14:52.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:52.637 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:14:52.637 13:13:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:52.637 13:13:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@433 -- # return 0 00:14:52.637 13:13:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:52.637 13:13:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:52.637 13:13:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:52.637 13:13:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:52.637 13:13:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:52.637 13:13:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:52.637 13:13:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:52.637 13:13:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:14:52.637 13:13:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:52.637 13:13:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:52.637 13:13:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:52.637 13:13:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:14:52.637 13:13:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=90203 00:14:52.637 13:13:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 90203 00:14:52.637 13:13:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@827 -- # '[' -z 90203 ']' 00:14:52.637 13:13:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:52.637 13:13:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:52.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:52.637 13:13:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:52.637 13:13:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:52.637 13:13:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:52.637 [2024-07-15 13:13:49.296930] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:14:52.637 [2024-07-15 13:13:49.297519] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:52.896 [2024-07-15 13:13:49.434941] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:52.896 [2024-07-15 13:13:49.538708] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:52.896 [2024-07-15 13:13:49.538788] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:52.896 [2024-07-15 13:13:49.538815] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:52.896 [2024-07-15 13:13:49.538830] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:52.896 [2024-07-15 13:13:49.538839] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:52.896 [2024-07-15 13:13:49.538971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:52.896 [2024-07-15 13:13:49.539341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:52.896 [2024-07-15 13:13:49.539646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:52.896 [2024-07-15 13:13:49.539665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:53.829 13:13:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:53.830 13:13:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # return 0 00:14:53.830 13:13:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:53.830 13:13:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:53.830 13:13:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:53.830 13:13:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:53.830 13:13:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:14:53.830 13:13:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:53.830 13:13:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:53.830 13:13:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:53.830 13:13:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:14:53.830 13:13:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:53.830 13:13:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:53.830 13:13:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:53.830 13:13:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:53.830 13:13:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:53.830 13:13:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:53.830 [2024-07-15 13:13:50.445145] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:53.830 13:13:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:53.830 13:13:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:53.830 13:13:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:53.830 13:13:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:53.830 Malloc0 00:14:53.830 13:13:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:53.830 13:13:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:53.830 13:13:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:53.830 
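With nvmf_tgt up, the rpc_cmd calls around this point provision the target itself; the namespace and listener are added immediately below. A rough equivalent as direct rpc.py calls against the same app (assuming scripts/rpc.py on PATH and the same RPC socket) is:

  # The target was started with --wait-for-rpc, so bdev options can still be set
  # before framework init; the deliberately tiny pool/cache (-p 5 -c 1) is what
  # pushes bdevperf I/O onto the bdev-io-wait path this test exercises.
  rpc.py bdev_set_options -p 5 -c 1
  rpc.py framework_start_init

  # TCP transport with the options the test passes (-o -u 8192), plus a
  # 64 MiB / 512 B Malloc bdev to export.
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0

  # Subsystem, namespace, and a TCP listener on the veth address set up earlier.
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420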
13:13:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:53.830 13:13:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:53.830 13:13:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:53.830 13:13:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:53.830 13:13:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:53.830 13:13:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:53.830 13:13:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:53.830 13:13:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:53.830 13:13:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:53.830 [2024-07-15 13:13:50.499920] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:53.830 13:13:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:53.830 13:13:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=90256 00:14:53.830 13:13:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:14:53.830 13:13:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:14:53.830 13:13:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:53.830 13:13:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:14:53.830 13:13:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=90258 00:14:53.830 13:13:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:53.830 13:13:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:53.830 { 00:14:53.830 "params": { 00:14:53.830 "name": "Nvme$subsystem", 00:14:53.830 "trtype": "$TEST_TRANSPORT", 00:14:53.830 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:53.830 "adrfam": "ipv4", 00:14:53.830 "trsvcid": "$NVMF_PORT", 00:14:53.830 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:53.830 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:53.830 "hdgst": ${hdgst:-false}, 00:14:53.830 "ddgst": ${ddgst:-false} 00:14:53.830 }, 00:14:53.830 "method": "bdev_nvme_attach_controller" 00:14:53.830 } 00:14:53.830 EOF 00:14:53.830 )") 00:14:53.830 13:13:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:14:53.830 13:13:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:14:53.830 13:13:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=90260 00:14:53.830 13:13:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:53.830 13:13:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:14:53.830 13:13:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:53.830 13:13:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:53.830 { 00:14:53.830 "params": { 00:14:53.830 "name": "Nvme$subsystem", 
00:14:53.830 "trtype": "$TEST_TRANSPORT", 00:14:53.830 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:53.830 "adrfam": "ipv4", 00:14:53.830 "trsvcid": "$NVMF_PORT", 00:14:53.830 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:53.830 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:53.830 "hdgst": ${hdgst:-false}, 00:14:53.830 "ddgst": ${ddgst:-false} 00:14:53.830 }, 00:14:53.830 "method": "bdev_nvme_attach_controller" 00:14:53.830 } 00:14:53.830 EOF 00:14:53.830 )") 00:14:53.830 13:13:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:53.830 13:13:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:14:53.830 13:13:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=90263 00:14:53.830 13:13:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:14:53.830 13:13:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:53.830 13:13:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:14:53.830 13:13:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:53.830 13:13:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:14:53.830 13:13:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:53.830 13:13:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:53.830 { 00:14:53.830 "params": { 00:14:53.830 "name": "Nvme$subsystem", 00:14:53.830 "trtype": "$TEST_TRANSPORT", 00:14:53.830 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:53.830 "adrfam": "ipv4", 00:14:53.830 "trsvcid": "$NVMF_PORT", 00:14:53.830 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:53.830 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:53.830 "hdgst": ${hdgst:-false}, 00:14:53.830 "ddgst": ${ddgst:-false} 00:14:53.830 }, 00:14:53.830 "method": "bdev_nvme_attach_controller" 00:14:53.830 } 00:14:53.830 EOF 00:14:53.830 )") 00:14:53.830 13:13:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:14:53.830 13:13:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:14:53.830 13:13:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:14:53.830 13:13:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:53.830 13:13:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:14:53.830 13:13:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:53.830 "params": { 00:14:53.830 "name": "Nvme1", 00:14:53.830 "trtype": "tcp", 00:14:53.830 "traddr": "10.0.0.2", 00:14:53.830 "adrfam": "ipv4", 00:14:53.830 "trsvcid": "4420", 00:14:53.830 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:53.830 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:53.830 "hdgst": false, 00:14:53.830 "ddgst": false 00:14:53.830 }, 00:14:53.830 "method": "bdev_nvme_attach_controller" 00:14:53.830 }' 00:14:53.830 13:13:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:14:53.830 13:13:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:53.830 "params": { 00:14:53.830 "name": "Nvme1", 00:14:53.830 "trtype": "tcp", 00:14:53.830 "traddr": "10.0.0.2", 00:14:53.830 "adrfam": "ipv4", 00:14:53.830 "trsvcid": "4420", 00:14:53.830 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:53.830 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:53.830 "hdgst": false, 00:14:53.830 "ddgst": false 00:14:53.830 }, 00:14:53.830 "method": "bdev_nvme_attach_controller" 00:14:53.830 }' 00:14:53.830 13:13:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:14:53.830 13:13:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:53.830 13:13:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:14:53.830 13:13:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:53.830 13:13:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:53.830 { 00:14:53.830 "params": { 00:14:53.830 "name": "Nvme$subsystem", 00:14:53.830 "trtype": "$TEST_TRANSPORT", 00:14:53.830 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:53.830 "adrfam": "ipv4", 00:14:53.830 "trsvcid": "$NVMF_PORT", 00:14:53.830 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:53.830 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:53.830 "hdgst": ${hdgst:-false}, 00:14:53.830 "ddgst": ${ddgst:-false} 00:14:53.830 }, 00:14:53.830 "method": "bdev_nvme_attach_controller" 00:14:53.830 } 00:14:53.830 EOF 00:14:53.830 )") 00:14:53.830 13:13:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:14:53.830 13:13:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:53.830 13:13:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:14:53.830 13:13:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:53.830 "params": { 00:14:53.830 "name": "Nvme1", 00:14:53.830 "trtype": "tcp", 00:14:53.830 "traddr": "10.0.0.2", 00:14:53.830 "adrfam": "ipv4", 00:14:53.830 "trsvcid": "4420", 00:14:53.830 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:53.830 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:53.830 "hdgst": false, 00:14:53.830 "ddgst": false 00:14:53.830 }, 00:14:53.830 "method": "bdev_nvme_attach_controller" 00:14:53.830 }' 00:14:53.830 13:13:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
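Each bdevperf instance above reads its target description from /dev/fd/63, generated by gen_nvmf_target_json; once substituted, it is just a bdev_nvme_attach_controller entry. Written out as a regular file with the values printed in the trace, and assuming the standard SPDK JSON-config wrapper (subsystems → bdev → config) that bdevperf's --json loader expects, the config each instance consumes amounts to:

  # Hypothetical file name; the test pipes the same content via /dev/fd/63.
  cat > /tmp/nvme1.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme1",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode1",
              "hostnqn": "nqn.2016-06.io.spdk:host1",
              "hdgst": false,
              "ddgst": false
            }
          }
        ]
      }
    ]
  }
  EOF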
00:14:53.830 13:13:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:14:53.830 13:13:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:53.830 "params": { 00:14:53.830 "name": "Nvme1", 00:14:53.830 "trtype": "tcp", 00:14:53.830 "traddr": "10.0.0.2", 00:14:53.830 "adrfam": "ipv4", 00:14:53.830 "trsvcid": "4420", 00:14:53.830 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:53.830 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:53.830 "hdgst": false, 00:14:53.830 "ddgst": false 00:14:53.830 }, 00:14:53.830 "method": "bdev_nvme_attach_controller" 00:14:53.830 }' 00:14:53.830 [2024-07-15 13:13:50.562799] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:14:53.830 [2024-07-15 13:13:50.562882] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:14:54.087 [2024-07-15 13:13:50.581155] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:14:54.087 [2024-07-15 13:13:50.581248] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:14:54.087 13:13:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 90256 00:14:54.087 [2024-07-15 13:13:50.593483] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:14:54.087 [2024-07-15 13:13:50.594293] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:14:54.087 [2024-07-15 13:13:50.594362] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:14:54.087 [2024-07-15 13:13:50.595033] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:14:54.087 [2024-07-15 13:13:50.768451] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:54.345 [2024-07-15 13:13:50.843538] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:54.345 [2024-07-15 13:13:50.844560] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:14:54.345 [2024-07-15 13:13:50.918393] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:54.345 [2024-07-15 13:13:50.924641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:14:54.345 [2024-07-15 13:13:50.991042] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:14:54.345 [2024-07-15 13:13:50.991319] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:54.345 Running I/O for 1 seconds... 00:14:54.345 [2024-07-15 13:13:51.060801] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:14:54.345 Running I/O for 1 seconds... 00:14:54.603 Running I/O for 1 seconds... 00:14:54.603 Running I/O for 1 seconds... 
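The four "Running I/O for 1 seconds..." lines correspond to the four bdevperf instances launched above, one per workload (write, read, flush, unmap) on its own core mask; their per-workload latency tables follow. Reproducing the write run by hand against a config file like the sketch above (hypothetical /tmp/nvme1.json path) would look like:

  # 128 outstanding 4 KiB writes for 1 second on core 4 (mask 0x10), with a
  # 256 MiB memory reservation and its own shared-memory instance id.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 0x10 -i 1 -s 256 \
      --json /tmp/nvme1.json \
      -q 128 -o 4096 -w write -t 1
  # The read, flush, and unmap runs in the trace differ only in -m, -i, and -w.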
00:14:55.537 00:14:55.537 Latency(us) 00:14:55.537 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:55.537 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:14:55.537 Nvme1n1 : 1.01 9744.31 38.06 0.00 0.00 13081.41 7238.75 20971.52 00:14:55.537 =================================================================================================================== 00:14:55.537 Total : 9744.31 38.06 0.00 0.00 13081.41 7238.75 20971.52 00:14:55.537 00:14:55.537 Latency(us) 00:14:55.537 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:55.537 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:14:55.537 Nvme1n1 : 1.01 7624.45 29.78 0.00 0.00 16689.58 4974.78 20733.21 00:14:55.537 =================================================================================================================== 00:14:55.538 Total : 7624.45 29.78 0.00 0.00 16689.58 4974.78 20733.21 00:14:55.538 00:14:55.538 Latency(us) 00:14:55.538 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:55.538 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:14:55.538 Nvme1n1 : 1.01 8845.88 34.55 0.00 0.00 14415.11 6583.39 25976.09 00:14:55.538 =================================================================================================================== 00:14:55.538 Total : 8845.88 34.55 0.00 0.00 14415.11 6583.39 25976.09 00:14:55.538 00:14:55.538 Latency(us) 00:14:55.538 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:55.538 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:14:55.538 Nvme1n1 : 1.00 193637.78 756.40 0.00 0.00 658.26 275.55 983.04 00:14:55.538 =================================================================================================================== 00:14:55.538 Total : 193637.78 756.40 0.00 0.00 658.26 275.55 983.04 00:14:55.538 13:13:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 90258 00:14:55.796 13:13:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 90260 00:14:55.796 13:13:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 90263 00:14:55.796 13:13:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:55.796 13:13:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.796 13:13:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:55.796 13:13:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.796 13:13:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:14:55.796 13:13:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:14:55.796 13:13:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:55.796 13:13:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:14:56.053 13:13:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:56.053 13:13:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:14:56.054 13:13:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:56.054 13:13:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:56.054 rmmod nvme_tcp 00:14:56.054 rmmod nvme_fabrics 00:14:56.054 rmmod nvme_keyring 00:14:56.054 13:13:52 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:56.054 13:13:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:14:56.054 13:13:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:14:56.054 13:13:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 90203 ']' 00:14:56.054 13:13:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 90203 00:14:56.054 13:13:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@946 -- # '[' -z 90203 ']' 00:14:56.054 13:13:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # kill -0 90203 00:14:56.054 13:13:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # uname 00:14:56.054 13:13:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:56.054 13:13:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 90203 00:14:56.054 13:13:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:56.054 13:13:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:56.054 killing process with pid 90203 00:14:56.054 13:13:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # echo 'killing process with pid 90203' 00:14:56.054 13:13:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@965 -- # kill 90203 00:14:56.054 13:13:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # wait 90203 00:14:56.311 13:13:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:56.311 13:13:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:56.311 13:13:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:56.311 13:13:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:56.311 13:13:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:56.311 13:13:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:56.311 13:13:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:56.311 13:13:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:56.311 13:13:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:56.311 00:14:56.311 real 0m4.083s 00:14:56.311 user 0m17.776s 00:14:56.311 sys 0m2.172s 00:14:56.311 13:13:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:56.311 13:13:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:56.311 ************************************ 00:14:56.311 END TEST nvmf_bdev_io_wait 00:14:56.312 ************************************ 00:14:56.312 13:13:52 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:14:56.312 13:13:52 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:56.312 13:13:52 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:56.312 13:13:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:56.312 ************************************ 00:14:56.312 START TEST nvmf_queue_depth 00:14:56.312 ************************************ 00:14:56.312 13:13:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1121 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:14:56.312 * Looking for test storage... 00:14:56.312 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:56.312 13:13:52 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:56.312 13:13:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:14:56.312 13:13:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:56.312 13:13:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:56.312 13:13:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:56.312 13:13:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:56.312 13:13:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:56.312 13:13:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:56.312 13:13:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:56.312 13:13:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:56.312 13:13:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:56.312 13:13:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:56.312 13:13:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:14:56.312 13:13:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=c8b8b44b-387e-43b9-a950-dc0d98528a02 00:14:56.312 13:13:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:56.312 13:13:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:56.312 13:13:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:56.312 13:13:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:56.312 13:13:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:56.312 13:13:53 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:56.312 13:13:53 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:56.312 13:13:53 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:56.312 13:13:53 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.312 13:13:53 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.312 13:13:53 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.312 13:13:53 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:14:56.312 13:13:53 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.312 13:13:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:14:56.312 13:13:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:56.312 13:13:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:56.312 13:13:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:56.312 13:13:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:56.312 13:13:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:56.312 13:13:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:56.312 13:13:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:56.312 13:13:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:56.312 13:13:53 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:14:56.312 13:13:53 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:14:56.312 13:13:53 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:56.312 13:13:53 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:14:56.312 13:13:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:56.312 13:13:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:56.312 13:13:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:56.312 13:13:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:56.312 13:13:53 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:14:56.312 13:13:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:56.312 13:13:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:56.312 13:13:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:56.312 13:13:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:56.312 13:13:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:56.312 13:13:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:56.312 13:13:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:56.312 13:13:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:56.312 13:13:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:56.312 13:13:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:56.312 13:13:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:56.312 13:13:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:56.312 13:13:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:56.312 13:13:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:56.312 13:13:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:56.312 13:13:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:56.312 13:13:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:56.312 13:13:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:56.312 13:13:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:56.312 13:13:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:56.312 13:13:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:56.312 13:13:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:56.312 13:13:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:56.570 Cannot find device "nvmf_tgt_br" 00:14:56.570 13:13:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # true 00:14:56.570 13:13:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:56.570 Cannot find device "nvmf_tgt_br2" 00:14:56.570 13:13:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # true 00:14:56.570 13:13:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:56.570 13:13:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:56.570 Cannot find device "nvmf_tgt_br" 00:14:56.570 13:13:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # true 00:14:56.570 13:13:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:56.570 Cannot find device "nvmf_tgt_br2" 00:14:56.570 13:13:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # true 00:14:56.570 13:13:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:56.570 13:13:53 
nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:56.570 13:13:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:56.570 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:56.570 13:13:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:14:56.570 13:13:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:56.570 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:56.570 13:13:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:14:56.570 13:13:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:56.570 13:13:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:56.570 13:13:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:56.570 13:13:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:56.570 13:13:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:56.570 13:13:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:56.570 13:13:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:56.570 13:13:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:56.570 13:13:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:56.570 13:13:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:56.570 13:13:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:56.570 13:13:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:56.570 13:13:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:56.570 13:13:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:56.570 13:13:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:56.570 13:13:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:56.570 13:13:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:56.570 13:13:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:56.570 13:13:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:56.570 13:13:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:56.570 13:13:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:56.828 13:13:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:56.828 13:13:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:56.828 13:13:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 
00:14:56.828 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:56.828 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:14:56.828 00:14:56.828 --- 10.0.0.2 ping statistics --- 00:14:56.828 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:56.828 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:14:56.828 13:13:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:56.828 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:56.828 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:14:56.828 00:14:56.828 --- 10.0.0.3 ping statistics --- 00:14:56.828 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:56.828 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:14:56.828 13:13:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:56.828 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:56.828 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:14:56.828 00:14:56.828 --- 10.0.0.1 ping statistics --- 00:14:56.828 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:56.828 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:14:56.828 13:13:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:56.828 13:13:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@433 -- # return 0 00:14:56.828 13:13:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:56.828 13:13:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:56.828 13:13:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:56.828 13:13:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:56.828 13:13:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:56.828 13:13:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:56.828 13:13:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:56.828 13:13:53 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:14:56.828 13:13:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:56.828 13:13:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:56.828 13:13:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:56.828 13:13:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=90496 00:14:56.828 13:13:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:56.828 13:13:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 90496 00:14:56.828 13:13:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 90496 ']' 00:14:56.828 13:13:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:56.828 13:13:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:56.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:56.828 13:13:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
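For readers following the trace above: nvmf_veth_init first tears down any leftover interfaces (the "Cannot find device" / "Cannot open network namespace" messages are expected on a fresh host) and then builds a small veth topology that the three pings verify. A condensed sketch of that setup, using only the interface names and addresses visible in this log (a sketch, not the common.sh source):

    #!/usr/bin/env bash
    # Rebuild the topology traced above: one initiator veth on the host,
    # two target veths inside the nvmf_tgt_ns_spdk namespace, all bridged.
    set -e

    ip netns add nvmf_tgt_ns_spdk

    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target path 1
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2   # target path 2

    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if                    # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    ip link add nvmf_br type bridge                             # bridge ties the host-side peers together
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    ping -c 1 10.0.0.2                                          # initiator -> target path 1
    ping -c 1 10.0.0.3                                          # initiator -> target path 2
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1           # target -> initiator

The second test (nvmf_target_multipath) repeats exactly the same init later in this log, which is why the same "Cannot find device" lines and ping checks appear again there.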
00:14:56.828 13:13:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:56.828 13:13:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:56.828 [2024-07-15 13:13:53.426789] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:14:56.828 [2024-07-15 13:13:53.426895] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:57.086 [2024-07-15 13:13:53.570568] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:57.086 [2024-07-15 13:13:53.670079] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:57.086 [2024-07-15 13:13:53.670132] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:57.086 [2024-07-15 13:13:53.670147] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:57.086 [2024-07-15 13:13:53.670158] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:57.086 [2024-07-15 13:13:53.670167] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:57.086 [2024-07-15 13:13:53.670194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:58.019 13:13:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:58.019 13:13:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:14:58.019 13:13:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:58.019 13:13:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:58.019 13:13:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:58.019 13:13:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:58.019 13:13:54 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:58.019 13:13:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.019 13:13:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:58.019 [2024-07-15 13:13:54.500505] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:58.019 13:13:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.019 13:13:54 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:58.019 13:13:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.019 13:13:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:58.019 Malloc0 00:14:58.019 13:13:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.019 13:13:54 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:58.019 13:13:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.019 13:13:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:58.019 13:13:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.019 13:13:54 
nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:58.019 13:13:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.019 13:13:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:58.019 13:13:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.019 13:13:54 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:58.019 13:13:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.019 13:13:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:58.019 [2024-07-15 13:13:54.565644] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:58.019 13:13:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.019 13:13:54 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=90546 00:14:58.019 13:13:54 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:14:58.019 13:13:54 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:58.019 13:13:54 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 90546 /var/tmp/bdevperf.sock 00:14:58.019 13:13:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 90546 ']' 00:14:58.019 13:13:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:58.019 13:13:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:58.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:58.019 13:13:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:58.019 13:13:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:58.019 13:13:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:58.019 [2024-07-15 13:13:54.624820] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
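The RPCs traced above stand up the queue-depth target, and bdevperf is then launched against it with 1024 outstanding 4 KiB verify I/Os. A minimal sketch of the same sequence, with paths shortened relative to the SPDK repo and all flags taken from this log (a sketch of what the trace shows, not the queue_depth.sh source):

    # Target side: nvmf_tgt runs inside the namespace, then the subsystem is wired up over RPC.
    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Initiator side: bdevperf attaches over TCP and runs the 10 s verify workload at qd=1024.
    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The attach, the "Running I/O for 10 seconds..." run, and the resulting IOPS/latency table follow immediately below in the trace.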
00:14:58.019 [2024-07-15 13:13:54.624922] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90546 ] 00:14:58.275 [2024-07-15 13:13:54.763777] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:58.275 [2024-07-15 13:13:54.862884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:58.275 13:13:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:58.275 13:13:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:14:58.275 13:13:54 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:58.276 13:13:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.276 13:13:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:58.532 NVMe0n1 00:14:58.532 13:13:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.532 13:13:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:58.532 Running I/O for 10 seconds... 00:15:08.535 00:15:08.535 Latency(us) 00:15:08.535 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:08.535 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:15:08.535 Verification LBA range: start 0x0 length 0x4000 00:15:08.535 NVMe0n1 : 10.08 9111.39 35.59 0.00 0.00 111826.44 26452.71 77689.95 00:15:08.535 =================================================================================================================== 00:15:08.535 Total : 9111.39 35.59 0.00 0.00 111826.44 26452.71 77689.95 00:15:08.535 0 00:15:08.792 13:14:05 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 90546 00:15:08.792 13:14:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 90546 ']' 00:15:08.792 13:14:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 90546 00:15:08.792 13:14:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:15:08.792 13:14:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:08.792 13:14:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 90546 00:15:08.792 13:14:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:08.792 killing process with pid 90546 00:15:08.792 13:14:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:08.792 13:14:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 90546' 00:15:08.792 Received shutdown signal, test time was about 10.000000 seconds 00:15:08.792 00:15:08.792 Latency(us) 00:15:08.792 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:08.792 =================================================================================================================== 00:15:08.792 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:08.792 13:14:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 90546 00:15:08.792 13:14:05 nvmf_tcp.nvmf_queue_depth 
-- common/autotest_common.sh@970 -- # wait 90546 00:15:09.050 13:14:05 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:09.050 13:14:05 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:15:09.050 13:14:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:09.050 13:14:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:15:09.050 13:14:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:09.050 13:14:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:15:09.050 13:14:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:09.050 13:14:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:09.050 rmmod nvme_tcp 00:15:09.050 rmmod nvme_fabrics 00:15:09.050 rmmod nvme_keyring 00:15:09.050 13:14:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:09.050 13:14:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:15:09.050 13:14:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:15:09.050 13:14:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 90496 ']' 00:15:09.050 13:14:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 90496 00:15:09.050 13:14:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 90496 ']' 00:15:09.050 13:14:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 90496 00:15:09.050 13:14:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:15:09.050 13:14:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:09.050 13:14:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 90496 00:15:09.050 13:14:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:15:09.050 killing process with pid 90496 00:15:09.050 13:14:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:15:09.050 13:14:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 90496' 00:15:09.050 13:14:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 90496 00:15:09.050 13:14:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 90496 00:15:09.308 13:14:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:09.308 13:14:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:09.308 13:14:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:09.308 13:14:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:09.308 13:14:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:09.308 13:14:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:09.308 13:14:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:09.308 13:14:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:09.308 13:14:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:09.308 00:15:09.308 real 0m13.118s 00:15:09.308 user 0m22.268s 00:15:09.308 sys 0m2.129s 00:15:09.308 13:14:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1122 -- # 
xtrace_disable 00:15:09.308 ************************************ 00:15:09.308 END TEST nvmf_queue_depth 00:15:09.308 ************************************ 00:15:09.308 13:14:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:09.567 13:14:06 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:09.567 13:14:06 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:09.567 13:14:06 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:09.567 13:14:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:09.567 ************************************ 00:15:09.567 START TEST nvmf_target_multipath 00:15:09.567 ************************************ 00:15:09.567 13:14:06 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:09.567 * Looking for test storage... 00:15:09.567 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:09.567 13:14:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:09.567 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:15:09.567 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:09.567 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:09.567 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:09.567 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:09.567 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:09.567 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:09.567 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:09.567 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:09.567 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:09.567 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:09.567 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:15:09.567 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=c8b8b44b-387e-43b9-a950-dc0d98528a02 00:15:09.567 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:09.567 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:09.567 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:09.567 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:09.567 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:09.567 13:14:06 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:09.567 13:14:06 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:09.567 13:14:06 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:15:09.567 13:14:06 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.567 13:14:06 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.567 13:14:06 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.567 13:14:06 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:15:09.567 13:14:06 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.567 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:15:09.567 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:09.567 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:09.567 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:09.567 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:09.567 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:09.567 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:09.567 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:09.567 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:09.567 13:14:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:15:09.567 13:14:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:09.567 13:14:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:15:09.567 13:14:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:09.567 13:14:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:15:09.567 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:09.567 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:09.567 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:09.567 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:09.567 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:09.567 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:09.567 13:14:06 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:09.567 13:14:06 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:09.567 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:09.567 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:09.567 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:09.567 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:09.567 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:09.567 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:09.567 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:09.567 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:09.567 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:09.567 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:09.567 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:09.567 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:09.567 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:09.567 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:09.567 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:09.567 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:09.567 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:09.567 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:09.567 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:09.567 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:09.567 Cannot find device "nvmf_tgt_br" 
00:15:09.567 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # true 00:15:09.567 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:09.567 Cannot find device "nvmf_tgt_br2" 00:15:09.568 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # true 00:15:09.568 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:09.568 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:09.568 Cannot find device "nvmf_tgt_br" 00:15:09.568 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # true 00:15:09.568 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:09.568 Cannot find device "nvmf_tgt_br2" 00:15:09.568 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # true 00:15:09.568 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:09.568 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:09.826 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:09.826 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:09.826 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:15:09.826 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:09.826 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:09.826 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:15:09.826 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:09.826 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:09.826 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:09.826 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:09.826 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:09.826 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:09.826 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:09.826 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:09.826 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:09.826 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:09.826 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:09.826 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:09.826 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:09.826 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:09.826 
13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:09.826 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:09.826 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:09.826 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:09.826 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:09.826 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:09.826 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:09.826 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:09.826 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:09.826 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:09.826 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:09.826 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:15:09.826 00:15:09.826 --- 10.0.0.2 ping statistics --- 00:15:09.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:09.826 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:15:09.826 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:09.826 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:09.826 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.033 ms 00:15:09.826 00:15:09.826 --- 10.0.0.3 ping statistics --- 00:15:09.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:09.826 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:15:09.826 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:09.826 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:09.827 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:15:09.827 00:15:09.827 --- 10.0.0.1 ping statistics --- 00:15:09.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:09.827 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:15:09.827 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:09.827 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@433 -- # return 0 00:15:09.827 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:09.827 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:09.827 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:09.827 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:09.827 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:09.827 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:09.827 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:09.827 13:14:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:15:09.827 13:14:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:15:09.827 13:14:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:15:09.827 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:09.827 13:14:06 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:09.827 13:14:06 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:15:09.827 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@481 -- # nvmfpid=90865 00:15:09.827 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@482 -- # waitforlisten 90865 00:15:09.827 13:14:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:09.827 13:14:06 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@827 -- # '[' -z 90865 ']' 00:15:09.827 13:14:06 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:09.827 13:14:06 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:09.827 13:14:06 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:09.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:09.827 13:14:06 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:09.827 13:14:06 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:15:10.086 [2024-07-15 13:14:06.571267] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:15:10.086 [2024-07-15 13:14:06.571354] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:10.086 [2024-07-15 13:14:06.710796] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:10.086 [2024-07-15 13:14:06.818899] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:10.086 [2024-07-15 13:14:06.818986] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:10.086 [2024-07-15 13:14:06.819010] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:10.086 [2024-07-15 13:14:06.819027] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:10.086 [2024-07-15 13:14:06.819042] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:10.086 [2024-07-15 13:14:06.819196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:10.086 [2024-07-15 13:14:06.820047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:10.086 [2024-07-15 13:14:06.820148] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:10.086 [2024-07-15 13:14:06.820164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:11.020 13:14:07 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:11.020 13:14:07 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@860 -- # return 0 00:15:11.020 13:14:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:11.020 13:14:07 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:11.021 13:14:07 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:15:11.021 13:14:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:11.021 13:14:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:11.279 [2024-07-15 13:14:07.827651] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:11.279 13:14:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:11.537 Malloc0 00:15:11.537 13:14:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:15:11.795 13:14:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:12.072 13:14:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:12.352 [2024-07-15 13:14:08.874585] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:12.352 13:14:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 
00:15:12.611 [2024-07-15 13:14:09.106764] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:12.611 13:14:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid=c8b8b44b-387e-43b9-a950-dc0d98528a02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:15:12.611 13:14:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid=c8b8b44b-387e-43b9-a950-dc0d98528a02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:15:12.869 13:14:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:15:12.869 13:14:09 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1194 -- # local i=0 00:15:12.869 13:14:09 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:15:12.869 13:14:09 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:15:12.869 13:14:09 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1201 -- # sleep 2 00:15:15.398 13:14:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:15:15.398 13:14:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:15:15.398 13:14:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:15:15.398 13:14:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:15:15.398 13:14:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:15:15.398 13:14:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # return 0 00:15:15.398 13:14:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:15:15.398 13:14:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:15:15.398 13:14:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:15:15.398 13:14:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:15:15.398 13:14:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:15:15.398 13:14:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:15:15.398 13:14:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:15:15.398 13:14:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:15:15.398 13:14:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:15:15.398 13:14:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:15:15.398 13:14:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:15:15.398 13:14:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:15:15.398 13:14:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 
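At this point the host is connected to the same subsystem over both 10.0.0.2 and 10.0.0.3, and the test resolves the two controller paths (nvme0c0n1 and nvme0c1n1) through sysfs before repeatedly checking their ANA states against what the listeners were set to. A small sketch of that discovery and the ana_state polling that follows in the log, using the names shown here and a simplified loop (a sketch, not the multipath.sh source):

    # Find the NVMe subsystem created by the test by matching its NQN.
    for s in /sys/class/nvme-subsystem/*; do
        [[ $(cat "$s/subsysnqn") == nqn.2016-06.io.spdk:cnode1 ]] || continue
        subsystem=${s##*/}                                   # e.g. nvme-subsys0
    done

    # Each multipath leg appears as a per-controller namespace, e.g. nvme0c0n1 / nvme0c1n1.
    paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*)
    paths=("${paths[@]##*/}")

    # Poll a path's ANA state (optimized / non-optimized / inaccessible) with a ~20 s timeout,
    # mirroring the check_ana_state loop traced below.
    check_ana_state() {
        local path=$1 expected=$2 timeout=20
        local f=/sys/block/$path/ana_state
        while [[ ! -e $f || $(<"$f") != "$expected" ]]; do
            (( timeout-- == 0 )) && return 1
            sleep 1
        done
    }

    check_ana_state "${paths[0]}" optimized
    check_ana_state "${paths[1]}" optimized

The trace below then flips the listener ANA states over RPC (inaccessible / non_optimized and back) while fio runs, and re-checks each path with the same kind of loop.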
00:15:15.398 13:14:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:15:15.398 13:14:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:15:15.398 13:14:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:15:15.398 13:14:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:15.398 13:14:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:15.398 13:14:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:15.398 13:14:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:15:15.398 13:14:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:15:15.398 13:14:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:15:15.398 13:14:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:15.398 13:14:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:15.398 13:14:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:15.398 13:14:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:15:15.398 13:14:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=91004 00:15:15.398 13:14:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:15:15.398 13:14:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:15:15.398 [global] 00:15:15.398 thread=1 00:15:15.398 invalidate=1 00:15:15.398 rw=randrw 00:15:15.398 time_based=1 00:15:15.398 runtime=6 00:15:15.398 ioengine=libaio 00:15:15.398 direct=1 00:15:15.398 bs=4096 00:15:15.398 iodepth=128 00:15:15.398 norandommap=0 00:15:15.398 numjobs=1 00:15:15.398 00:15:15.398 verify_dump=1 00:15:15.398 verify_backlog=512 00:15:15.398 verify_state_save=0 00:15:15.398 do_verify=1 00:15:15.398 verify=crc32c-intel 00:15:15.398 [job0] 00:15:15.398 filename=/dev/nvme0n1 00:15:15.398 Could not set queue depth (nvme0n1) 00:15:15.398 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:15.398 fio-3.35 00:15:15.398 Starting 1 thread 00:15:15.962 13:14:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:15:16.220 13:14:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:15:16.477 13:14:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:15:16.477 13:14:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:15:16.477 13:14:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:15:16.477 13:14:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # 
local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:16.477 13:14:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:16.477 13:14:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:16.477 13:14:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:15:16.477 13:14:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:15:16.477 13:14:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:15:16.477 13:14:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:16.477 13:14:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:16.477 13:14:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:16.477 13:14:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:15:17.408 13:14:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:17.408 13:14:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:17.408 13:14:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:17.409 13:14:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:17.701 13:14:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:15:17.959 13:14:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:15:17.959 13:14:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:15:17.959 13:14:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:15:17.959 13:14:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:17.959 13:14:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:17.959 13:14:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:17.959 13:14:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:15:17.959 13:14:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:15:17.959 13:14:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:15:17.959 13:14:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:17.959 13:14:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:17.959 13:14:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:17.959 13:14:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:15:18.893 13:14:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:18.893 13:14:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:18.893 13:14:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:18.893 13:14:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 91004 00:15:21.422 00:15:21.422 job0: (groupid=0, jobs=1): err= 0: pid=91025: Mon Jul 15 13:14:17 2024 00:15:21.422 read: IOPS=11.0k, BW=43.0MiB/s (45.1MB/s)(258MiB/6005msec) 00:15:21.422 slat (usec): min=2, max=6577, avg=51.74, stdev=234.16 00:15:21.422 clat (usec): min=689, max=15739, avg=7925.34, stdev=1202.90 00:15:21.422 lat (usec): min=744, max=15755, avg=7977.08, stdev=1212.52 00:15:21.422 clat percentiles (usec): 00:15:21.422 | 1.00th=[ 4752], 5.00th=[ 6128], 10.00th=[ 6783], 20.00th=[ 7177], 00:15:21.422 | 30.00th=[ 7373], 40.00th=[ 7570], 50.00th=[ 7767], 60.00th=[ 8029], 00:15:21.422 | 70.00th=[ 8356], 80.00th=[ 8717], 90.00th=[ 9241], 95.00th=[10028], 00:15:21.422 | 99.00th=[11731], 99.50th=[12125], 99.90th=[13042], 99.95th=[13304], 00:15:21.422 | 99.99th=[13960] 00:15:21.422 bw ( KiB/s): min=11704, max=29992, per=52.64%, avg=23186.42, stdev=6109.60, samples=12 00:15:21.422 iops : min= 2926, max= 7498, avg=5796.58, stdev=1527.39, samples=12 00:15:21.422 write: IOPS=6652, BW=26.0MiB/s (27.2MB/s)(136MiB/5234msec); 0 zone resets 00:15:21.422 slat (usec): min=3, max=5297, avg=63.80, stdev=160.46 00:15:21.422 clat (usec): min=521, max=12982, avg=6811.55, stdev=1015.09 00:15:21.422 lat (usec): min=633, max=13007, avg=6875.35, stdev=1019.19 00:15:21.422 clat percentiles (usec): 00:15:21.422 | 1.00th=[ 3785], 5.00th=[ 5014], 10.00th=[ 5800], 20.00th=[ 6259], 00:15:21.422 | 30.00th=[ 6521], 40.00th=[ 6718], 50.00th=[ 6849], 60.00th=[ 7046], 00:15:21.422 | 70.00th=[ 7177], 80.00th=[ 7439], 90.00th=[ 7701], 95.00th=[ 8029], 00:15:21.422 | 99.00th=[10159], 99.50th=[10814], 99.90th=[12256], 99.95th=[12649], 00:15:21.422 | 99.99th=[12911] 00:15:21.422 bw ( KiB/s): min=12328, max=29352, per=87.17%, avg=23196.25, stdev=5756.61, samples=12 00:15:21.422 iops : min= 3082, max= 7338, avg=5799.00, stdev=1439.12, samples=12 00:15:21.422 lat (usec) : 750=0.01%, 1000=0.01% 00:15:21.422 lat (msec) : 2=0.07%, 4=0.64%, 10=95.63%, 20=3.64% 00:15:21.422 cpu : usr=5.46%, sys=23.61%, ctx=6623, majf=0, minf=133 00:15:21.422 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:15:21.422 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:21.422 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:21.422 issued rwts: total=66130,34821,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:21.422 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:21.422 00:15:21.422 Run status group 0 (all jobs): 00:15:21.422 READ: bw=43.0MiB/s (45.1MB/s), 43.0MiB/s-43.0MiB/s (45.1MB/s-45.1MB/s), io=258MiB (271MB), run=6005-6005msec 00:15:21.422 WRITE: bw=26.0MiB/s (27.2MB/s), 26.0MiB/s-26.0MiB/s (27.2MB/s-27.2MB/s), io=136MiB (143MB), run=5234-5234msec 00:15:21.422 00:15:21.422 Disk stats (read/write): 00:15:21.422 nvme0n1: ios=65349/33948, merge=0/0, 
ticks=484164/215530, in_queue=699694, util=98.53% 00:15:21.422 13:14:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:15:21.680 13:14:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:15:21.938 13:14:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:15:21.938 13:14:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:15:21.938 13:14:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:15:21.938 13:14:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:21.938 13:14:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:21.938 13:14:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:21.938 13:14:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:15:21.938 13:14:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:15:21.938 13:14:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:15:21.938 13:14:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:21.938 13:14:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:21.938 13:14:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:15:21.938 13:14:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:15:22.871 13:14:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:22.871 13:14:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:22.871 13:14:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:22.871 13:14:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:15:22.871 13:14:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=91157 00:15:22.871 13:14:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:15:22.871 13:14:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:15:22.871 [global] 00:15:22.871 thread=1 00:15:22.871 invalidate=1 00:15:22.871 rw=randrw 00:15:22.871 time_based=1 00:15:22.871 runtime=6 00:15:22.871 ioengine=libaio 00:15:22.871 direct=1 00:15:22.871 bs=4096 00:15:22.871 iodepth=128 00:15:22.871 norandommap=0 00:15:22.871 numjobs=1 00:15:22.871 00:15:22.871 verify_dump=1 00:15:22.871 verify_backlog=512 00:15:22.871 verify_state_save=0 00:15:22.871 do_verify=1 00:15:22.871 verify=crc32c-intel 00:15:22.871 [job0] 00:15:22.871 filename=/dev/nvme0n1 00:15:22.871 Could not set queue depth (nvme0n1) 00:15:23.128 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:23.128 fio-3.35 00:15:23.128 Starting 1 thread 00:15:24.061 13:14:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:15:24.319 13:14:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:15:24.577 13:14:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:15:24.577 13:14:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:15:24.577 13:14:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:15:24.577 13:14:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:24.577 13:14:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:24.577 13:14:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:24.577 13:14:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:15:24.577 13:14:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:15:24.577 13:14:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:15:24.577 13:14:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:24.577 13:14:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:24.577 13:14:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:24.577 13:14:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:15:25.511 13:14:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:25.511 13:14:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:25.511 13:14:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:25.511 13:14:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:25.769 13:14:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:15:26.027 13:14:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:15:26.027 13:14:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:15:26.028 13:14:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:15:26.028 13:14:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:26.028 13:14:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:26.028 13:14:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:26.028 13:14:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:15:26.028 13:14:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:15:26.028 13:14:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:15:26.028 13:14:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:26.028 13:14:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:26.028 13:14:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:26.028 13:14:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:15:26.960 13:14:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:26.960 13:14:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:26.960 13:14:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:26.960 13:14:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 91157 00:15:29.486 00:15:29.486 job0: (groupid=0, jobs=1): err= 0: pid=91179: Mon Jul 15 13:14:25 2024 00:15:29.486 read: IOPS=11.9k, BW=46.4MiB/s (48.6MB/s)(278MiB/6004msec) 00:15:29.486 slat (usec): min=5, max=6327, avg=43.80, stdev=198.80 00:15:29.486 clat (usec): min=191, max=18166, avg=7378.69, stdev=1765.60 00:15:29.486 lat (usec): min=218, max=18184, avg=7422.49, stdev=1780.43 00:15:29.486 clat percentiles (usec): 00:15:29.486 | 1.00th=[ 3097], 5.00th=[ 4359], 10.00th=[ 5014], 20.00th=[ 5932], 00:15:29.486 | 30.00th=[ 6849], 40.00th=[ 7308], 50.00th=[ 7439], 60.00th=[ 7701], 00:15:29.486 | 70.00th=[ 8094], 80.00th=[ 8586], 90.00th=[ 9241], 95.00th=[10290], 00:15:29.486 | 99.00th=[12125], 99.50th=[12780], 99.90th=[14877], 99.95th=[15270], 00:15:29.486 | 99.99th=[17171] 00:15:29.486 bw ( KiB/s): min= 9720, max=35488, per=53.07%, avg=25200.73, stdev=7485.31, samples=11 00:15:29.486 iops : min= 2430, max= 8872, avg=6300.18, stdev=1871.33, samples=11 00:15:29.486 write: IOPS=6858, BW=26.8MiB/s (28.1MB/s)(145MiB/5407msec); 0 zone resets 00:15:29.486 slat (usec): min=12, max=2817, avg=59.24, stdev=125.27 00:15:29.486 clat (usec): min=272, max=15889, avg=6354.32, stdev=1646.42 00:15:29.486 lat (usec): min=327, max=15942, avg=6413.57, stdev=1656.71 00:15:29.486 clat percentiles (usec): 00:15:29.486 | 1.00th=[ 2343], 5.00th=[ 3490], 10.00th=[ 4047], 20.00th=[ 4883], 00:15:29.486 | 30.00th=[ 5800], 40.00th=[ 6325], 50.00th=[ 6587], 60.00th=[ 6849], 00:15:29.486 | 70.00th=[ 7111], 80.00th=[ 7439], 90.00th=[ 7963], 95.00th=[ 8979], 00:15:29.486 | 99.00th=[10683], 99.50th=[11207], 99.90th=[12649], 99.95th=[13566], 00:15:29.486 | 99.99th=[14615] 00:15:29.486 bw ( KiB/s): min=10240, max=36176, per=91.69%, avg=25154.91, stdev=7226.49, samples=11 00:15:29.486 iops : min= 2560, max= 9044, avg=6288.73, stdev=1806.62, samples=11 00:15:29.486 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.03% 00:15:29.486 lat (msec) : 2=0.35%, 4=4.85%, 10=89.86%, 20=4.89% 00:15:29.486 cpu : usr=6.45%, sys=29.49%, ctx=10010, majf=0, minf=84 00:15:29.486 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:15:29.486 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:29.486 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:29.486 issued rwts: total=71278,37083,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:29.486 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:29.486 00:15:29.486 Run status group 0 (all jobs): 00:15:29.486 READ: bw=46.4MiB/s (48.6MB/s), 46.4MiB/s-46.4MiB/s (48.6MB/s-48.6MB/s), io=278MiB (292MB), run=6004-6004msec 00:15:29.486 WRITE: bw=26.8MiB/s (28.1MB/s), 26.8MiB/s-26.8MiB/s (28.1MB/s-28.1MB/s), io=145MiB (152MB), run=5407-5407msec 00:15:29.486 00:15:29.486 Disk stats (read/write): 00:15:29.486 nvme0n1: ios=69704/37083, merge=0/0, ticks=457667/199439, in_queue=657106, util=98.60% 00:15:29.486 13:14:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:29.486 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:15:29.486 13:14:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:29.486 13:14:25 
nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1215 -- # local i=0 00:15:29.486 13:14:25 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:15:29.486 13:14:25 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:29.486 13:14:25 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:15:29.486 13:14:25 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:29.486 13:14:25 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # return 0 00:15:29.486 13:14:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:29.486 13:14:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:15:29.486 13:14:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:15:29.486 13:14:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:15:29.486 13:14:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:15:29.486 13:14:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:29.486 13:14:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:15:29.486 13:14:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:29.486 13:14:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:15:29.486 13:14:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:29.486 13:14:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:29.486 rmmod nvme_tcp 00:15:29.486 rmmod nvme_fabrics 00:15:29.486 rmmod nvme_keyring 00:15:29.743 13:14:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:29.743 13:14:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:15:29.743 13:14:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:15:29.743 13:14:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n 90865 ']' 00:15:29.743 13:14:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@490 -- # killprocess 90865 00:15:29.743 13:14:26 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@946 -- # '[' -z 90865 ']' 00:15:29.743 13:14:26 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@950 -- # kill -0 90865 00:15:29.743 13:14:26 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@951 -- # uname 00:15:29.743 13:14:26 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:29.743 13:14:26 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 90865 00:15:29.743 13:14:26 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:29.743 killing process with pid 90865 00:15:29.743 13:14:26 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:29.743 13:14:26 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@964 -- # echo 'killing process with pid 90865' 00:15:29.743 13:14:26 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@965 -- # kill 90865 00:15:29.743 13:14:26 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@970 
-- # wait 90865 00:15:30.000 13:14:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:30.000 13:14:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:30.000 13:14:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:30.000 13:14:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:30.000 13:14:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:30.000 13:14:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:30.000 13:14:26 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:30.000 13:14:26 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:30.000 13:14:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:30.000 00:15:30.000 real 0m20.455s 00:15:30.000 user 1m20.377s 00:15:30.000 sys 0m6.768s 00:15:30.000 13:14:26 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:30.000 13:14:26 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:15:30.000 ************************************ 00:15:30.000 END TEST nvmf_target_multipath 00:15:30.000 ************************************ 00:15:30.000 13:14:26 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:30.000 13:14:26 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:30.000 13:14:26 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:30.000 13:14:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:30.000 ************************************ 00:15:30.000 START TEST nvmf_zcopy 00:15:30.000 ************************************ 00:15:30.000 13:14:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:30.000 * Looking for test storage... 
00:15:30.000 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:30.000 13:14:26 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:30.000 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:15:30.000 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:30.000 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:30.000 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:30.000 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:30.000 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:30.000 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:30.000 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:30.000 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:30.000 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:30.000 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:30.000 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:15:30.000 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=c8b8b44b-387e-43b9-a950-dc0d98528a02 00:15:30.000 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:30.000 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:30.000 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:30.000 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:30.000 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:30.000 13:14:26 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:30.000 13:14:26 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:30.000 13:14:26 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:30.001 13:14:26 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.001 13:14:26 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.001 13:14:26 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.001 13:14:26 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:15:30.001 13:14:26 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.001 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:15:30.001 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:30.001 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:30.001 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:30.001 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:30.001 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:30.001 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:30.001 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:30.001 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:30.001 13:14:26 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:15:30.001 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:30.001 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:30.001 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:30.001 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:30.001 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:30.001 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:30.001 13:14:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:30.001 13:14:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:30.001 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:30.001 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:30.001 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:30.001 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:30.001 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:30.001 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:30.001 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:30.001 13:14:26 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:30.001 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:30.001 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:30.001 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:30.001 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:30.001 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:30.001 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:30.001 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:30.001 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:30.001 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:30.001 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:30.001 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:30.001 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:30.001 Cannot find device "nvmf_tgt_br" 00:15:30.001 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # true 00:15:30.001 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:30.001 Cannot find device "nvmf_tgt_br2" 00:15:30.001 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # true 00:15:30.001 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:30.001 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:30.001 Cannot find device "nvmf_tgt_br" 00:15:30.001 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # true 00:15:30.001 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:30.001 Cannot find device "nvmf_tgt_br2" 00:15:30.001 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # true 00:15:30.001 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:30.258 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:30.258 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:30.258 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:30.258 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:15:30.258 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:30.258 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:30.258 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:15:30.258 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:30.258 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:30.258 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:30.258 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:30.258 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:15:30.258 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:30.258 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:30.258 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:30.258 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:30.258 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:30.258 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:30.258 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:30.258 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:30.258 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:30.258 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:30.258 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:30.258 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:30.258 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:30.258 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:30.258 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:30.258 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:30.258 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:30.258 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:30.258 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:30.258 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:30.258 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:15:30.258 00:15:30.258 --- 10.0.0.2 ping statistics --- 00:15:30.258 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:30.258 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:15:30.258 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:30.258 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:30.258 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:15:30.258 00:15:30.258 --- 10.0.0.3 ping statistics --- 00:15:30.258 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:30.258 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:15:30.258 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:30.258 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:30.258 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:15:30.258 00:15:30.258 --- 10.0.0.1 ping statistics --- 00:15:30.258 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:30.258 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:15:30.258 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:30.258 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@433 -- # return 0 00:15:30.258 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:30.258 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:30.258 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:30.258 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:30.258 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:30.258 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:30.258 13:14:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:30.516 13:14:27 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:15:30.516 13:14:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:30.516 13:14:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:30.516 13:14:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:30.516 13:14:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=91456 00:15:30.516 13:14:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:30.516 13:14:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 91456 00:15:30.516 13:14:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@827 -- # '[' -z 91456 ']' 00:15:30.516 13:14:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:30.516 13:14:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:30.516 13:14:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:30.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:30.516 13:14:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:30.516 13:14:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:30.516 [2024-07-15 13:14:27.058912] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:15:30.516 [2024-07-15 13:14:27.059007] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:30.516 [2024-07-15 13:14:27.191101] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:30.774 [2024-07-15 13:14:27.289607] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:30.774 [2024-07-15 13:14:27.289661] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:30.774 [2024-07-15 13:14:27.289674] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:30.774 [2024-07-15 13:14:27.289683] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:30.774 [2024-07-15 13:14:27.289692] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:30.774 [2024-07-15 13:14:27.289730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:31.704 13:14:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:31.704 13:14:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@860 -- # return 0 00:15:31.704 13:14:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:31.704 13:14:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:31.704 13:14:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:31.704 13:14:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:31.704 13:14:28 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:15:31.704 13:14:28 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:15:31.704 13:14:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.704 13:14:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:31.704 [2024-07-15 13:14:28.134094] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:31.704 13:14:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.704 13:14:28 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:31.704 13:14:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.704 13:14:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:31.704 13:14:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.704 13:14:28 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:31.704 13:14:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.704 13:14:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:31.704 [2024-07-15 13:14:28.154223] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:31.704 13:14:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.704 13:14:28 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:31.704 13:14:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.705 13:14:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:31.705 13:14:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.705 13:14:28 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:15:31.705 13:14:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.705 13:14:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:31.705 malloc0 00:15:31.705 13:14:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.705 
13:14:28 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:31.705 13:14:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.705 13:14:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:31.705 13:14:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.705 13:14:28 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:15:31.705 13:14:28 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:15:31.705 13:14:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:15:31.705 13:14:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:15:31.705 13:14:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:31.705 13:14:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:31.705 { 00:15:31.705 "params": { 00:15:31.705 "name": "Nvme$subsystem", 00:15:31.705 "trtype": "$TEST_TRANSPORT", 00:15:31.705 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:31.705 "adrfam": "ipv4", 00:15:31.705 "trsvcid": "$NVMF_PORT", 00:15:31.705 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:31.705 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:31.705 "hdgst": ${hdgst:-false}, 00:15:31.705 "ddgst": ${ddgst:-false} 00:15:31.705 }, 00:15:31.705 "method": "bdev_nvme_attach_controller" 00:15:31.705 } 00:15:31.705 EOF 00:15:31.705 )") 00:15:31.705 13:14:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:15:31.705 13:14:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:15:31.705 13:14:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:15:31.705 13:14:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:31.705 "params": { 00:15:31.705 "name": "Nvme1", 00:15:31.705 "trtype": "tcp", 00:15:31.705 "traddr": "10.0.0.2", 00:15:31.705 "adrfam": "ipv4", 00:15:31.705 "trsvcid": "4420", 00:15:31.705 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:31.705 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:31.705 "hdgst": false, 00:15:31.705 "ddgst": false 00:15:31.705 }, 00:15:31.705 "method": "bdev_nvme_attach_controller" 00:15:31.705 }' 00:15:31.705 [2024-07-15 13:14:28.248575] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:15:31.705 [2024-07-15 13:14:28.248674] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91507 ] 00:15:31.705 [2024-07-15 13:14:28.391990] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:32.018 [2024-07-15 13:14:28.498261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:32.018 Running I/O for 10 seconds... 
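Condensed, the zcopy target setup traced above comes down to a handful of RPCs followed by a bdevperf run against a generated NVMe-oF attach config. A sketch using the same paths, NQNs, and flags seen in the trace (the $rpc variable name is mine; rpc_cmd in the trace roughly corresponds to scripts/rpc.py against the running target, and /dev/fd/62 is the test's process-substituted JSON, which a standalone run would replace with a config file):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy                       # TCP transport with zero-copy enabled
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_malloc_create 32 4096 -b malloc0                              # 32 MB malloc bdev, 4 KiB blocks
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

  # Host side: 10 s of 8 KiB verify I/O at queue depth 128 through bdevperf.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192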
00:15:42.019 00:15:42.019 Latency(us) 00:15:42.019 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:42.019 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:15:42.019 Verification LBA range: start 0x0 length 0x1000 00:15:42.019 Nvme1n1 : 10.02 5532.71 43.22 0.00 0.00 23060.77 2561.86 35508.60 00:15:42.019 =================================================================================================================== 00:15:42.019 Total : 5532.71 43.22 0.00 0.00 23060.77 2561.86 35508.60 00:15:42.277 13:14:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=91624 00:15:42.277 13:14:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:15:42.277 13:14:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:15:42.277 13:14:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:15:42.277 13:14:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:15:42.277 13:14:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:15:42.277 13:14:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:42.277 13:14:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:42.277 13:14:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:42.277 { 00:15:42.277 "params": { 00:15:42.277 "name": "Nvme$subsystem", 00:15:42.277 "trtype": "$TEST_TRANSPORT", 00:15:42.277 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:42.277 "adrfam": "ipv4", 00:15:42.277 "trsvcid": "$NVMF_PORT", 00:15:42.277 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:42.277 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:42.277 "hdgst": ${hdgst:-false}, 00:15:42.277 "ddgst": ${ddgst:-false} 00:15:42.277 }, 00:15:42.277 "method": "bdev_nvme_attach_controller" 00:15:42.277 } 00:15:42.277 EOF 00:15:42.277 )") 00:15:42.277 13:14:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:15:42.277 13:14:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:15:42.277 [2024-07-15 13:14:38.922788] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.277 [2024-07-15 13:14:38.922838] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.277 13:14:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:15:42.277 13:14:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:42.277 "params": { 00:15:42.277 "name": "Nvme1", 00:15:42.277 "trtype": "tcp", 00:15:42.277 "traddr": "10.0.0.2", 00:15:42.277 "adrfam": "ipv4", 00:15:42.277 "trsvcid": "4420", 00:15:42.277 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:42.277 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:42.277 "hdgst": false, 00:15:42.277 "ddgst": false 00:15:42.277 }, 00:15:42.277 "method": "bdev_nvme_attach_controller" 00:15:42.277 }' 00:15:42.277 2024/07/15 13:14:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.277 [2024-07-15 13:14:38.934755] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.277 [2024-07-15 13:14:38.934790] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.277 2024/07/15 13:14:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.277 [2024-07-15 13:14:38.946765] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.277 [2024-07-15 13:14:38.946801] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.277 2024/07/15 13:14:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.277 [2024-07-15 13:14:38.958771] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.277 [2024-07-15 13:14:38.958814] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.277 2024/07/15 13:14:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.277 [2024-07-15 13:14:38.970774] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.277 [2024-07-15 13:14:38.970814] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.277 2024/07/15 13:14:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.277 [2024-07-15 13:14:38.982787] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.277 [2024-07-15 13:14:38.982832] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:15:42.277 [2024-07-15 13:14:38.983661] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:15:42.277 [2024-07-15 13:14:38.983732] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91624 ] 00:15:42.277 2024/07/15 13:14:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.277 [2024-07-15 13:14:38.990744] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.277 [2024-07-15 13:14:38.990775] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.277 2024/07/15 13:14:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.277 [2024-07-15 13:14:38.998746] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.277 [2024-07-15 13:14:38.998783] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.277 2024/07/15 13:14:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.277 [2024-07-15 13:14:39.010776] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.277 [2024-07-15 13:14:39.010815] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.277 2024/07/15 13:14:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.537 [2024-07-15 13:14:39.022817] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.537 [2024-07-15 13:14:39.022864] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.537 2024/07/15 13:14:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.537 [2024-07-15 13:14:39.034796] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.537 [2024-07-15 13:14:39.034840] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.537 2024/07/15 13:14:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.537 [2024-07-15 13:14:39.046803] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:15:42.537 [2024-07-15 13:14:39.046849] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.537 2024/07/15 13:14:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.537 [2024-07-15 13:14:39.058827] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.537 [2024-07-15 13:14:39.058889] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.537 2024/07/15 13:14:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.537 [2024-07-15 13:14:39.070821] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.537 [2024-07-15 13:14:39.070869] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.537 2024/07/15 13:14:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.537 [2024-07-15 13:14:39.078792] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.537 [2024-07-15 13:14:39.078837] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.537 2024/07/15 13:14:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.537 [2024-07-15 13:14:39.090815] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.537 [2024-07-15 13:14:39.090860] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.537 2024/07/15 13:14:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.537 [2024-07-15 13:14:39.098793] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.537 [2024-07-15 13:14:39.098832] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.537 2024/07/15 13:14:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.537 [2024-07-15 13:14:39.110819] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.537 [2024-07-15 13:14:39.110863] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.537 2024/07/15 13:14:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.537 [2024-07-15 13:14:39.122647] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:42.537 [2024-07-15 13:14:39.122816] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.537 [2024-07-15 13:14:39.122840] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.537 2024/07/15 13:14:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.537 [2024-07-15 13:14:39.134841] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.537 [2024-07-15 13:14:39.134891] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.537 2024/07/15 13:14:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.537 [2024-07-15 13:14:39.146830] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.537 [2024-07-15 13:14:39.146873] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.537 2024/07/15 13:14:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.537 [2024-07-15 13:14:39.158845] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.537 [2024-07-15 13:14:39.158887] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.537 2024/07/15 13:14:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.537 [2024-07-15 13:14:39.170841] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.537 [2024-07-15 13:14:39.170884] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.537 2024/07/15 13:14:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.537 [2024-07-15 13:14:39.182864] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.537 [2024-07-15 13:14:39.182910] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.537 2024/07/15 13:14:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.537 [2024-07-15 13:14:39.194862] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.537 [2024-07-15 13:14:39.194908] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.537 2024/07/15 13:14:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.537 [2024-07-15 13:14:39.206846] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.537 [2024-07-15 13:14:39.206889] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.537 2024/07/15 13:14:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.537 [2024-07-15 13:14:39.218861] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.537 [2024-07-15 13:14:39.218908] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.537 2024/07/15 13:14:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.537 [2024-07-15 13:14:39.227318] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:42.537 [2024-07-15 13:14:39.230852] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.537 [2024-07-15 13:14:39.230889] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.537 2024/07/15 13:14:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.537 [2024-07-15 13:14:39.242862] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.537 [2024-07-15 13:14:39.242911] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.537 2024/07/15 13:14:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.537 [2024-07-15 13:14:39.254870] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.537 [2024-07-15 13:14:39.254920] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.537 2024/07/15 13:14:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.537 [2024-07-15 13:14:39.262855] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.537 [2024-07-15 13:14:39.262898] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:15:42.537 2024/07/15 13:14:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.537 [2024-07-15 13:14:39.274879] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.537 [2024-07-15 13:14:39.274917] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.796 2024/07/15 13:14:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.796 [2024-07-15 13:14:39.286879] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.796 [2024-07-15 13:14:39.286928] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.796 2024/07/15 13:14:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.796 [2024-07-15 13:14:39.298884] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.796 [2024-07-15 13:14:39.298935] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.796 2024/07/15 13:14:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.796 [2024-07-15 13:14:39.310892] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.796 [2024-07-15 13:14:39.310943] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.796 2024/07/15 13:14:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.796 [2024-07-15 13:14:39.322896] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.796 [2024-07-15 13:14:39.322940] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.796 2024/07/15 13:14:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.796 [2024-07-15 13:14:39.334883] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.796 [2024-07-15 13:14:39.334926] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.796 2024/07/15 13:14:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:15:42.796 [2024-07-15 13:14:39.346907] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.796 [2024-07-15 13:14:39.346955] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.796 2024/07/15 13:14:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.796 [2024-07-15 13:14:39.358896] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.796 [2024-07-15 13:14:39.358940] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.796 2024/07/15 13:14:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.796 [2024-07-15 13:14:39.370915] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.796 [2024-07-15 13:14:39.370958] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.796 2024/07/15 13:14:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.796 [2024-07-15 13:14:39.382911] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.796 [2024-07-15 13:14:39.382953] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.796 2024/07/15 13:14:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.796 [2024-07-15 13:14:39.394915] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.796 [2024-07-15 13:14:39.394959] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.796 2024/07/15 13:14:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.796 [2024-07-15 13:14:39.406920] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.796 [2024-07-15 13:14:39.406967] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.796 Running I/O for 5 seconds... 
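The "Running I/O for 5 seconds..." record marks bdevperf (started above as spdk_pid91624) beginning its timed workload against the attached bdev. The exact command line is not captured in this excerpt; a representative invocation is sketched below, with the binary path, the bdevperf_target.json file name and every flag value assumed rather than taken from the log.

# Representative bdevperf run; all paths and values here are assumptions.
# -q queue depth, -o I/O size in bytes, -w workload type, -t run time in seconds
./build/examples/bdevperf --json ./bdevperf_target.json -q 64 -o 4096 -w verify -t 5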
00:15:42.796 2024/07/15 13:14:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.796 [2024-07-15 13:14:39.425227] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.796 [2024-07-15 13:14:39.425286] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.797 2024/07/15 13:14:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.797 [2024-07-15 13:14:39.442170] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.797 [2024-07-15 13:14:39.442250] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.797 2024/07/15 13:14:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.797 [2024-07-15 13:14:39.457797] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.797 [2024-07-15 13:14:39.457855] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.797 2024/07/15 13:14:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.797 [2024-07-15 13:14:39.468428] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.797 [2024-07-15 13:14:39.468478] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.797 2024/07/15 13:14:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.797 [2024-07-15 13:14:39.481741] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.797 [2024-07-15 13:14:39.481804] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.797 2024/07/15 13:14:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.797 [2024-07-15 13:14:39.493388] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.797 [2024-07-15 13:14:39.493438] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.797 2024/07/15 13:14:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 
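The error triplet repeated throughout this run, "Requested NSID 1 already in use", "Unable to add namespace", and the JSON-RPC Code=-32602 reply, is the target rejecting another nvmf_subsystem_add_ns call for a namespace slot that is already occupied. One way to provoke the same rejection by hand is sketched below; the positional argument order and the -n nsid flag are assumptions about scripts/rpc.py, not commands taken from this run.

# Assumed rpc.py usage, for illustration only.
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
# Re-issuing the same call while NSID 1 is still in use is rejected with
# Code=-32602 Msg=Invalid parameters, matching the records above.
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1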
00:15:42.797 [2024-07-15 13:14:39.505253] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.797 [2024-07-15 13:14:39.505294] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.797 2024/07/15 13:14:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.797 [2024-07-15 13:14:39.523169] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.797 [2024-07-15 13:14:39.523246] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.797 2024/07/15 13:14:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.055 [2024-07-15 13:14:39.540524] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.055 [2024-07-15 13:14:39.540588] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.055 2024/07/15 13:14:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.055 [2024-07-15 13:14:39.556758] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.055 [2024-07-15 13:14:39.556820] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.056 2024/07/15 13:14:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.056 [2024-07-15 13:14:39.573811] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.056 [2024-07-15 13:14:39.575445] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.056 2024/07/15 13:14:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.056 [2024-07-15 13:14:39.591844] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.056 [2024-07-15 13:14:39.591913] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.056 2024/07/15 13:14:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.056 [2024-07-15 13:14:39.607095] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.056 [2024-07-15 13:14:39.607159] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.056 2024/07/15 13:14:39 error on JSON-RPC call, 
method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.056 [2024-07-15 13:14:39.617749] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.056 [2024-07-15 13:14:39.617946] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.056 2024/07/15 13:14:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.056 [2024-07-15 13:14:39.631057] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.056 [2024-07-15 13:14:39.631312] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.056 2024/07/15 13:14:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.056 [2024-07-15 13:14:39.646997] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.056 [2024-07-15 13:14:39.647281] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.056 2024/07/15 13:14:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.056 [2024-07-15 13:14:39.663078] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.056 [2024-07-15 13:14:39.663145] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.056 2024/07/15 13:14:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.056 [2024-07-15 13:14:39.679626] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.056 [2024-07-15 13:14:39.679692] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.056 2024/07/15 13:14:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.056 [2024-07-15 13:14:39.690107] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.056 [2024-07-15 13:14:39.690330] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.056 2024/07/15 13:14:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.056 [2024-07-15 13:14:39.705653] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.056 [2024-07-15 13:14:39.705906] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.056 2024/07/15 13:14:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.056 [2024-07-15 13:14:39.721125] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.056 [2024-07-15 13:14:39.721426] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.056 2024/07/15 13:14:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.056 [2024-07-15 13:14:39.731895] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.056 [2024-07-15 13:14:39.732125] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.056 2024/07/15 13:14:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.056 [2024-07-15 13:14:39.747119] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.056 [2024-07-15 13:14:39.747428] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.056 2024/07/15 13:14:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.056 [2024-07-15 13:14:39.764558] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.056 [2024-07-15 13:14:39.764820] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.056 2024/07/15 13:14:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.056 [2024-07-15 13:14:39.781146] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.056 [2024-07-15 13:14:39.781222] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.056 2024/07/15 13:14:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.313 [2024-07-15 13:14:39.798051] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.313 [2024-07-15 13:14:39.798322] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.313 2024/07/15 13:14:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.313 [2024-07-15 13:14:39.814147] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.313 [2024-07-15 13:14:39.814442] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.313 2024/07/15 13:14:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.313 [2024-07-15 13:14:39.831162] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.313 [2024-07-15 13:14:39.831507] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.313 2024/07/15 13:14:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.313 [2024-07-15 13:14:39.848854] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.313 [2024-07-15 13:14:39.849145] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.313 2024/07/15 13:14:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.313 [2024-07-15 13:14:39.865333] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.313 [2024-07-15 13:14:39.865594] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.313 2024/07/15 13:14:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.313 [2024-07-15 13:14:39.881898] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.313 [2024-07-15 13:14:39.882172] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.313 2024/07/15 13:14:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.313 [2024-07-15 13:14:39.893300] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.313 [2024-07-15 13:14:39.893551] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.313 2024/07/15 13:14:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.313 [2024-07-15 13:14:39.907928] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:15:43.313 [2024-07-15 13:14:39.907989] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.313 2024/07/15 13:14:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.313 [2024-07-15 13:14:39.924933] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.313 [2024-07-15 13:14:39.924997] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.313 2024/07/15 13:14:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.314 [2024-07-15 13:14:39.941792] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.314 [2024-07-15 13:14:39.942047] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.314 2024/07/15 13:14:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.314 [2024-07-15 13:14:39.959232] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.314 [2024-07-15 13:14:39.959494] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.314 2024/07/15 13:14:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.314 [2024-07-15 13:14:39.976487] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.314 [2024-07-15 13:14:39.976720] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.314 2024/07/15 13:14:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.314 [2024-07-15 13:14:39.992220] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.314 [2024-07-15 13:14:39.992480] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.314 2024/07/15 13:14:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.314 [2024-07-15 13:14:40.003005] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.314 [2024-07-15 13:14:40.003269] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.314 2024/07/15 13:14:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.314 [2024-07-15 13:14:40.019259] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.314 [2024-07-15 13:14:40.019525] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.314 2024/07/15 13:14:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.314 [2024-07-15 13:14:40.034684] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.314 [2024-07-15 13:14:40.034748] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.314 2024/07/15 13:14:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.571 [2024-07-15 13:14:40.051587] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.571 [2024-07-15 13:14:40.051664] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.571 2024/07/15 13:14:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.571 [2024-07-15 13:14:40.067292] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.571 [2024-07-15 13:14:40.067568] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.571 2024/07/15 13:14:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.571 [2024-07-15 13:14:40.084599] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.571 [2024-07-15 13:14:40.084937] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.571 2024/07/15 13:14:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.571 [2024-07-15 13:14:40.101122] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.571 [2024-07-15 13:14:40.101425] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.571 2024/07/15 13:14:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.571 [2024-07-15 13:14:40.118358] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:15:43.571 [2024-07-15 13:14:40.118604] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.571 2024/07/15 13:14:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.571 [2024-07-15 13:14:40.135918] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.571 [2024-07-15 13:14:40.136188] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.571 2024/07/15 13:14:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.571 [2024-07-15 13:14:40.152475] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.571 [2024-07-15 13:14:40.152749] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.571 2024/07/15 13:14:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.571 [2024-07-15 13:14:40.168725] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.571 [2024-07-15 13:14:40.168976] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.572 2024/07/15 13:14:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.572 [2024-07-15 13:14:40.186245] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.572 [2024-07-15 13:14:40.186516] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.572 2024/07/15 13:14:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.572 [2024-07-15 13:14:40.203768] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.572 [2024-07-15 13:14:40.204018] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.572 2024/07/15 13:14:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.572 [2024-07-15 13:14:40.219778] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.572 [2024-07-15 13:14:40.220031] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.572 2024/07/15 13:14:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.572 [2024-07-15 13:14:40.231102] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.572 [2024-07-15 13:14:40.231160] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.572 2024/07/15 13:14:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.572 [2024-07-15 13:14:40.245595] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.572 [2024-07-15 13:14:40.245655] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.572 2024/07/15 13:14:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.572 [2024-07-15 13:14:40.261938] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.572 [2024-07-15 13:14:40.262175] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.572 2024/07/15 13:14:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.572 [2024-07-15 13:14:40.279060] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.572 [2024-07-15 13:14:40.279315] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.572 2024/07/15 13:14:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.572 [2024-07-15 13:14:40.295483] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.572 [2024-07-15 13:14:40.295730] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.572 2024/07/15 13:14:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.829 [2024-07-15 13:14:40.312010] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.829 [2024-07-15 13:14:40.312310] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.829 2024/07/15 13:14:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.829 [2024-07-15 13:14:40.328591] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.829 [2024-07-15 13:14:40.328843] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.829 2024/07/15 13:14:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.829 [2024-07-15 13:14:40.345730] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.829 [2024-07-15 13:14:40.346037] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.829 2024/07/15 13:14:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.829 [2024-07-15 13:14:40.361828] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.829 [2024-07-15 13:14:40.362119] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.829 2024/07/15 13:14:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.829 [2024-07-15 13:14:40.374797] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.829 [2024-07-15 13:14:40.374878] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.829 2024/07/15 13:14:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.829 [2024-07-15 13:14:40.391911] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.829 [2024-07-15 13:14:40.392192] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.829 2024/07/15 13:14:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.829 [2024-07-15 13:14:40.407782] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.829 [2024-07-15 13:14:40.408039] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.830 2024/07/15 13:14:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.830 [2024-07-15 13:14:40.425395] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.830 [2024-07-15 13:14:40.425605] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.830 2024/07/15 13:14:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:15:43.830 [2024-07-15 13:14:40.442615] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:43.830 [2024-07-15 13:14:40.442853] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:43.830 2024/07/15 13:14:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[... the same three-line error sequence repeats for every subsequent nvmf_subsystem_add_ns attempt, timestamps 13:14:40.459 through 13:14:42.547, differing only in timestamps ...]
00:15:45.951 [2024-07-15 13:14:42.563752] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:45.951 [2024-07-15 13:14:42.563922]
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.951 2024/07/15 13:14:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.951 [2024-07-15 13:14:42.578856] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.951 [2024-07-15 13:14:42.578901] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.951 2024/07/15 13:14:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.951 [2024-07-15 13:14:42.596085] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.951 [2024-07-15 13:14:42.596140] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.952 2024/07/15 13:14:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.952 [2024-07-15 13:14:42.611517] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.952 [2024-07-15 13:14:42.611566] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.952 2024/07/15 13:14:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.952 [2024-07-15 13:14:42.627831] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.952 [2024-07-15 13:14:42.628042] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.952 2024/07/15 13:14:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.952 [2024-07-15 13:14:42.644189] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.952 [2024-07-15 13:14:42.644432] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.952 2024/07/15 13:14:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.952 [2024-07-15 13:14:42.661352] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.952 [2024-07-15 13:14:42.661539] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.952 2024/07/15 13:14:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.952 [2024-07-15 13:14:42.678494] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.952 [2024-07-15 13:14:42.678694] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.952 2024/07/15 13:14:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.210 [2024-07-15 13:14:42.694358] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.210 [2024-07-15 13:14:42.694562] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.210 2024/07/15 13:14:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.210 [2024-07-15 13:14:42.712470] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.210 [2024-07-15 13:14:42.712678] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.210 2024/07/15 13:14:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.210 [2024-07-15 13:14:42.729679] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.210 [2024-07-15 13:14:42.729735] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.210 2024/07/15 13:14:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.210 [2024-07-15 13:14:42.745402] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.210 [2024-07-15 13:14:42.745601] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.210 2024/07/15 13:14:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.210 [2024-07-15 13:14:42.756285] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.210 [2024-07-15 13:14:42.756334] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.210 2024/07/15 13:14:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.210 [2024-07-15 13:14:42.771529] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.210 [2024-07-15 13:14:42.771584] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:15:46.210 2024/07/15 13:14:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.210 [2024-07-15 13:14:42.788134] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.210 [2024-07-15 13:14:42.788190] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.210 2024/07/15 13:14:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.210 [2024-07-15 13:14:42.803574] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.210 [2024-07-15 13:14:42.803772] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.210 2024/07/15 13:14:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.210 [2024-07-15 13:14:42.815287] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.210 [2024-07-15 13:14:42.815341] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.210 2024/07/15 13:14:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.210 [2024-07-15 13:14:42.830598] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.210 [2024-07-15 13:14:42.830656] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.210 2024/07/15 13:14:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.210 [2024-07-15 13:14:42.847305] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.210 [2024-07-15 13:14:42.847509] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.210 2024/07/15 13:14:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.210 [2024-07-15 13:14:42.862978] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.210 [2024-07-15 13:14:42.863164] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.210 2024/07/15 13:14:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:15:46.210 [2024-07-15 13:14:42.880010] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.210 [2024-07-15 13:14:42.880222] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.210 2024/07/15 13:14:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.210 [2024-07-15 13:14:42.897096] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.210 [2024-07-15 13:14:42.897342] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.210 2024/07/15 13:14:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.210 [2024-07-15 13:14:42.913230] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.210 [2024-07-15 13:14:42.913461] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.210 2024/07/15 13:14:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.210 [2024-07-15 13:14:42.930331] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.210 [2024-07-15 13:14:42.930568] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.210 2024/07/15 13:14:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.210 [2024-07-15 13:14:42.946718] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.210 [2024-07-15 13:14:42.946951] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.467 2024/07/15 13:14:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.467 [2024-07-15 13:14:42.963698] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.467 [2024-07-15 13:14:42.963959] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.467 2024/07/15 13:14:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.467 [2024-07-15 13:14:42.979943] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.467 [2024-07-15 13:14:42.980008] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.467 2024/07/15 13:14:42 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.467 [2024-07-15 13:14:42.990261] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.467 [2024-07-15 13:14:42.990339] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.467 2024/07/15 13:14:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.467 [2024-07-15 13:14:43.006866] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.467 [2024-07-15 13:14:43.006930] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.467 2024/07/15 13:14:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.468 [2024-07-15 13:14:43.023598] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.468 [2024-07-15 13:14:43.023858] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.468 2024/07/15 13:14:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.468 [2024-07-15 13:14:43.040612] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.468 [2024-07-15 13:14:43.040850] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.468 2024/07/15 13:14:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.468 [2024-07-15 13:14:43.056779] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.468 [2024-07-15 13:14:43.057036] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.468 2024/07/15 13:14:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.468 [2024-07-15 13:14:43.067034] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.468 [2024-07-15 13:14:43.067301] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.468 2024/07/15 13:14:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.468 [2024-07-15 13:14:43.083328] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.468 [2024-07-15 13:14:43.083590] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.468 2024/07/15 13:14:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.468 [2024-07-15 13:14:43.099513] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.468 [2024-07-15 13:14:43.099824] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.468 2024/07/15 13:14:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.468 [2024-07-15 13:14:43.116305] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.468 [2024-07-15 13:14:43.116375] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.468 2024/07/15 13:14:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.468 [2024-07-15 13:14:43.134880] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.468 [2024-07-15 13:14:43.134949] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.468 2024/07/15 13:14:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.468 [2024-07-15 13:14:43.149527] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.468 [2024-07-15 13:14:43.149765] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.468 2024/07/15 13:14:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.468 [2024-07-15 13:14:43.166469] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.468 [2024-07-15 13:14:43.166770] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.468 2024/07/15 13:14:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.468 [2024-07-15 13:14:43.182216] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.468 [2024-07-15 13:14:43.182491] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.468 2024/07/15 13:14:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.468 [2024-07-15 13:14:43.192676] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.468 [2024-07-15 13:14:43.192909] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.468 2024/07/15 13:14:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.725 [2024-07-15 13:14:43.208465] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.725 [2024-07-15 13:14:43.208723] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.725 2024/07/15 13:14:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.725 [2024-07-15 13:14:43.224670] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.725 [2024-07-15 13:14:43.224983] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.725 2024/07/15 13:14:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.725 [2024-07-15 13:14:43.241059] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.725 [2024-07-15 13:14:43.241340] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.725 2024/07/15 13:14:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.725 [2024-07-15 13:14:43.252221] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.726 [2024-07-15 13:14:43.252277] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.726 2024/07/15 13:14:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.726 [2024-07-15 13:14:43.267841] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.726 [2024-07-15 13:14:43.267904] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.726 2024/07/15 13:14:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.726 [2024-07-15 13:14:43.285019] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:15:46.726 [2024-07-15 13:14:43.285290] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.726 2024/07/15 13:14:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.726 [2024-07-15 13:14:43.301013] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.726 [2024-07-15 13:14:43.301307] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.726 2024/07/15 13:14:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.726 [2024-07-15 13:14:43.311980] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.726 [2024-07-15 13:14:43.312269] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.726 2024/07/15 13:14:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.726 [2024-07-15 13:14:43.326719] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.726 [2024-07-15 13:14:43.326969] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.726 2024/07/15 13:14:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.726 [2024-07-15 13:14:43.343405] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.726 [2024-07-15 13:14:43.343641] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.726 2024/07/15 13:14:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.726 [2024-07-15 13:14:43.360933] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.726 [2024-07-15 13:14:43.360995] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.726 2024/07/15 13:14:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.726 [2024-07-15 13:14:43.377629] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.726 [2024-07-15 13:14:43.377698] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.726 2024/07/15 13:14:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.726 [2024-07-15 13:14:43.392020] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.726 [2024-07-15 13:14:43.392272] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.726 2024/07/15 13:14:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.726 [2024-07-15 13:14:43.408285] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.726 [2024-07-15 13:14:43.408534] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.726 2024/07/15 13:14:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.726 [2024-07-15 13:14:43.419565] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.726 [2024-07-15 13:14:43.419784] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.726 2024/07/15 13:14:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.726 [2024-07-15 13:14:43.435082] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.726 [2024-07-15 13:14:43.435147] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.726 2024/07/15 13:14:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.726 [2024-07-15 13:14:43.452485] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.726 [2024-07-15 13:14:43.452548] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.726 2024/07/15 13:14:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.984 [2024-07-15 13:14:43.468258] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.984 [2024-07-15 13:14:43.468530] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.984 2024/07/15 13:14:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.984 [2024-07-15 13:14:43.485512] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:15:46.984 [2024-07-15 13:14:43.485793] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.984 2024/07/15 13:14:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.984 [2024-07-15 13:14:43.501926] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.984 [2024-07-15 13:14:43.502193] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.984 2024/07/15 13:14:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.984 [2024-07-15 13:14:43.513680] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.984 [2024-07-15 13:14:43.513930] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.984 2024/07/15 13:14:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.984 [2024-07-15 13:14:43.529012] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.984 [2024-07-15 13:14:43.529278] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.984 2024/07/15 13:14:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.984 [2024-07-15 13:14:43.546549] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.984 [2024-07-15 13:14:43.546805] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.984 2024/07/15 13:14:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.984 [2024-07-15 13:14:43.558022] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.984 [2024-07-15 13:14:43.558219] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.984 2024/07/15 13:14:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.984 [2024-07-15 13:14:43.572503] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.984 [2024-07-15 13:14:43.572563] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.984 2024/07/15 13:14:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.984 [2024-07-15 13:14:43.590365] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.984 [2024-07-15 13:14:43.590423] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.984 2024/07/15 13:14:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.984 [2024-07-15 13:14:43.606563] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.984 [2024-07-15 13:14:43.606770] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.984 2024/07/15 13:14:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.984 [2024-07-15 13:14:43.624317] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.984 [2024-07-15 13:14:43.624590] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.984 2024/07/15 13:14:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.984 [2024-07-15 13:14:43.641493] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.984 [2024-07-15 13:14:43.641747] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.984 2024/07/15 13:14:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.985 [2024-07-15 13:14:43.658866] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.985 [2024-07-15 13:14:43.659137] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.985 2024/07/15 13:14:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.985 [2024-07-15 13:14:43.675437] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.985 [2024-07-15 13:14:43.675693] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.985 2024/07/15 13:14:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.985 [2024-07-15 13:14:43.692052] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.985 [2024-07-15 13:14:43.692359] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.985 2024/07/15 13:14:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.985 [2024-07-15 13:14:43.708632] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.985 [2024-07-15 13:14:43.708924] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.985 2024/07/15 13:14:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.242 [2024-07-15 13:14:43.726818] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.242 [2024-07-15 13:14:43.727070] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.242 2024/07/15 13:14:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.242 [2024-07-15 13:14:43.742527] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.242 [2024-07-15 13:14:43.742790] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.242 2024/07/15 13:14:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.242 [2024-07-15 13:14:43.754677] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.242 [2024-07-15 13:14:43.754914] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.242 2024/07/15 13:14:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.242 [2024-07-15 13:14:43.770441] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.242 [2024-07-15 13:14:43.770711] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.242 2024/07/15 13:14:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.242 [2024-07-15 13:14:43.785923] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.242 [2024-07-15 13:14:43.786181] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.242 2024/07/15 13:14:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.242 [2024-07-15 13:14:43.803035] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.242 [2024-07-15 13:14:43.803099] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.242 2024/07/15 13:14:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.242 [2024-07-15 13:14:43.819250] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.242 [2024-07-15 13:14:43.819323] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.242 2024/07/15 13:14:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.242 [2024-07-15 13:14:43.836066] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.242 [2024-07-15 13:14:43.836341] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.242 2024/07/15 13:14:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.242 [2024-07-15 13:14:43.852104] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.242 [2024-07-15 13:14:43.852395] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.242 2024/07/15 13:14:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.242 [2024-07-15 13:14:43.863880] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.242 [2024-07-15 13:14:43.864120] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.242 2024/07/15 13:14:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.242 [2024-07-15 13:14:43.879399] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.242 [2024-07-15 13:14:43.879679] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.242 2024/07/15 13:14:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.242 [2024-07-15 13:14:43.893991] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.242 [2024-07-15 13:14:43.894228] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace
00:15:47.242 2024/07/15 13:14:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:15:47.242 [2024-07-15 13:14:43.911766] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:47.242 [2024-07-15 13:14:43.912023] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same three-entry failure cycle (JSON-RPC Code=-32602 Msg=Invalid parameters, "Requested NSID 1 already in use", "Unable to add namespace") repeats for every nvmf_subsystem_add_ns retry between 13:14:43.929 and 13:14:44.409 ...]
00:15:47.759
00:15:47.759 Latency(us)
00:15:47.759 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:47.759 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:15:47.759 Nvme1n1 : 5.01 10508.19 82.10 0.00 0.00 12164.81 5004.57 23473.80
00:15:47.759 ===================================================================================================================
00:15:47.759 Total : 10508.19 82.10 0.00 0.00 12164.81 5004.57 23473.80
[... the retry cycle continues at roughly 12 ms intervals from 13:14:44.421 through 13:14:44.641 ...]
00:15:48.019 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (91624) - No such process
00:15:48.019 13:14:44 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 91624
00:15:48.019 13:14:44 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:15:48.019 13:14:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:48.019 13:14:44 nvmf_tcp.nvmf_zcopy --
common/autotest_common.sh@10 -- # set +x 00:15:48.019 13:14:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.019 13:14:44 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:48.019 13:14:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.019 13:14:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:48.019 delay0 00:15:48.019 13:14:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.019 13:14:44 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:15:48.019 13:14:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.019 13:14:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:48.019 13:14:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.019 13:14:44 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:15:48.277 [2024-07-15 13:14:44.855577] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:15:54.870 Initializing NVMe Controllers 00:15:54.870 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:54.870 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:54.870 Initialization complete. Launching workers. 00:15:54.870 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 62 00:15:54.870 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 349, failed to submit 33 00:15:54.870 success 162, unsuccess 187, failed 0 00:15:54.870 13:14:50 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:15:54.870 13:14:50 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:15:54.871 13:14:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:54.871 13:14:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:15:54.871 13:14:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:54.871 13:14:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:15:54.871 13:14:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:54.871 13:14:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:54.871 rmmod nvme_tcp 00:15:54.871 rmmod nvme_fabrics 00:15:54.871 rmmod nvme_keyring 00:15:54.871 13:14:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:54.871 13:14:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:15:54.871 13:14:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:15:54.871 13:14:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 91456 ']' 00:15:54.871 13:14:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 91456 00:15:54.871 13:14:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@946 -- # '[' -z 91456 ']' 00:15:54.871 13:14:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@950 -- # kill -0 91456 00:15:54.871 13:14:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # uname 00:15:54.871 13:14:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 
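For reference, the zcopy wrap-up traced above re-exposes malloc0 through a delay bdev and drives it with the abort example before tearing the target down. Reduced to plain RPC calls, the sequence is roughly the following; this is a hedged sketch issued with scripts/rpc.py from the SPDK repo root against the running target's default RPC socket, not the harness's own rpc_cmd wrapper:

  # drop the namespace the earlier add_ns retries kept colliding with
  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  # wrap malloc0 in a delay bdev with the latency values used by the test
  scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  # re-expose the delayed bdev as NSID 1 on the same subsystem
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  # issue and abort queued I/O against it for 5 seconds, as the test does
  build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'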
00:15:54.871 13:14:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 91456 00:15:54.871 killing process with pid 91456 00:15:54.871 13:14:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:15:54.871 13:14:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:15:54.871 13:14:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@964 -- # echo 'killing process with pid 91456' 00:15:54.871 13:14:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@965 -- # kill 91456 00:15:54.871 13:14:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@970 -- # wait 91456 00:15:54.871 13:14:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:54.871 13:14:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:54.871 13:14:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:54.871 13:14:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:54.871 13:14:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:54.871 13:14:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:54.871 13:14:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:54.871 13:14:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:54.871 13:14:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:54.871 00:15:54.871 real 0m24.729s 00:15:54.871 user 0m39.545s 00:15:54.871 sys 0m6.824s 00:15:54.871 ************************************ 00:15:54.871 END TEST nvmf_zcopy 00:15:54.871 ************************************ 00:15:54.871 13:14:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:54.871 13:14:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:54.871 13:14:51 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:15:54.871 13:14:51 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:54.871 13:14:51 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:54.871 13:14:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:54.871 ************************************ 00:15:54.871 START TEST nvmf_nmic 00:15:54.871 ************************************ 00:15:54.871 13:14:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:15:54.871 * Looking for test storage... 
00:15:54.871 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:54.871 13:14:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:54.871 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:15:54.871 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:54.871 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:54.871 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:54.871 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:54.871 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:54.871 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:54.871 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:54.871 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:54.871 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:54.871 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:54.871 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:15:54.871 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=c8b8b44b-387e-43b9-a950-dc0d98528a02 00:15:54.871 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:54.871 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:54.871 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:54.871 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:54.871 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:54.871 13:14:51 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:54.871 13:14:51 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:54.871 13:14:51 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:54.871 13:14:51 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.871 13:14:51 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.871 13:14:51 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.871 13:14:51 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:15:54.871 13:14:51 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.871 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:15:54.871 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:54.871 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:54.871 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:54.871 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:54.871 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:54.871 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:54.871 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:54.871 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:54.871 13:14:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:54.871 13:14:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:54.871 13:14:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:15:54.871 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:54.871 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:54.871 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:54.871 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:54.871 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:54.871 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:54.871 13:14:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:54.871 13:14:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:54.871 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:54.871 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:54.871 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:54.871 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:54.871 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:54.871 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@432 -- # nvmf_veth_init 
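The nvmf_veth_init call invoked here builds the point-to-point test network with ordinary ip and iptables commands; condensed from the trace that follows, the recipe amounts to the sketch below (the second target interface and its bridge port are omitted, and the names and addresses are the ones the trace uses):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # host side
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # the target itself is then launched inside the namespace (as traced further below)
  ip netns exec nvmf_tgt_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF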
00:15:54.871 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:54.871 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:54.871 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:54.871 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:54.871 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:54.871 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:54.871 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:54.871 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:54.871 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:54.871 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:54.871 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:54.871 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:54.871 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:54.871 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:54.871 Cannot find device "nvmf_tgt_br" 00:15:54.871 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # true 00:15:54.871 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:54.871 Cannot find device "nvmf_tgt_br2" 00:15:54.871 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # true 00:15:54.871 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:54.871 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:54.871 Cannot find device "nvmf_tgt_br" 00:15:54.871 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # true 00:15:54.871 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:54.871 Cannot find device "nvmf_tgt_br2" 00:15:54.871 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # true 00:15:54.871 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:54.871 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:54.871 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:54.871 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:54.871 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:15:54.871 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:54.871 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:54.871 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:15:54.871 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:54.871 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:54.871 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:54.871 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name 
nvmf_tgt_br2 00:15:54.871 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:55.129 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:55.129 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:55.129 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:55.129 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:55.129 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:55.129 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:55.129 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:55.129 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:55.129 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:55.129 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:55.129 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:55.129 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:55.129 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:55.129 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:55.129 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:55.129 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:55.129 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:55.129 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:55.129 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:55.129 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:55.129 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:15:55.129 00:15:55.129 --- 10.0.0.2 ping statistics --- 00:15:55.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:55.129 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:15:55.129 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:55.129 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:55.129 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.085 ms 00:15:55.129 00:15:55.129 --- 10.0.0.3 ping statistics --- 00:15:55.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:55.129 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:15:55.129 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:55.129 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:55.129 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:15:55.129 00:15:55.129 --- 10.0.0.1 ping statistics --- 00:15:55.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:55.129 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:15:55.129 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:55.129 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@433 -- # return 0 00:15:55.129 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:55.129 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:55.129 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:55.129 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:55.129 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:55.129 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:55.129 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:55.129 13:14:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:15:55.129 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:55.129 13:14:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:55.129 13:14:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:55.129 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=91945 00:15:55.129 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 91945 00:15:55.129 13:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:55.129 13:14:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@827 -- # '[' -z 91945 ']' 00:15:55.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:55.129 13:14:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:55.129 13:14:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:55.129 13:14:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:55.129 13:14:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:55.129 13:14:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:55.129 [2024-07-15 13:14:51.849672] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:15:55.129 [2024-07-15 13:14:51.849768] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:55.387 [2024-07-15 13:14:51.982746] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:55.387 [2024-07-15 13:14:52.099392] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:55.387 [2024-07-15 13:14:52.099466] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:55.387 [2024-07-15 13:14:52.099484] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:55.387 [2024-07-15 13:14:52.099497] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:55.387 [2024-07-15 13:14:52.099508] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:55.387 [2024-07-15 13:14:52.099669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:55.387 [2024-07-15 13:14:52.099759] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:55.387 [2024-07-15 13:14:52.100537] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:55.387 [2024-07-15 13:14:52.100787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:56.322 13:14:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:56.322 13:14:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@860 -- # return 0 00:15:56.322 13:14:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:56.322 13:14:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:56.322 13:14:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:56.322 13:14:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:56.322 13:14:52 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:56.322 13:14:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.322 13:14:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:56.322 [2024-07-15 13:14:52.872979] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:56.322 13:14:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.322 13:14:52 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:56.322 13:14:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.322 13:14:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:56.322 Malloc0 00:15:56.322 13:14:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.322 13:14:52 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:56.322 13:14:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.322 13:14:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:56.322 13:14:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.322 13:14:52 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:56.322 13:14:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.322 13:14:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:56.322 13:14:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.322 13:14:52 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:56.322 13:14:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.322 13:14:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:56.322 [2024-07-15 13:14:52.942396] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:56.322 13:14:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.322 13:14:52 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:15:56.322 test case1: single bdev can't be used in multiple subsystems 00:15:56.322 13:14:52 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:15:56.322 13:14:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.322 13:14:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:56.322 13:14:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.322 13:14:52 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:15:56.322 13:14:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.322 13:14:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:56.322 13:14:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.322 13:14:52 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:15:56.322 13:14:52 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:15:56.322 13:14:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.322 13:14:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:56.322 [2024-07-15 13:14:52.966203] bdev.c:8035:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:15:56.322 [2024-07-15 13:14:52.966255] subsystem.c:2063:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:15:56.322 [2024-07-15 13:14:52.966268] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.322 2024/07/15 13:14:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.322 request: 00:15:56.322 { 00:15:56.322 "method": "nvmf_subsystem_add_ns", 00:15:56.322 "params": { 00:15:56.322 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:15:56.322 "namespace": { 00:15:56.322 "bdev_name": "Malloc0", 00:15:56.322 "no_auto_visible": false 00:15:56.322 } 00:15:56.322 } 00:15:56.322 } 00:15:56.322 Got JSON-RPC error response 00:15:56.322 GoRPCClient: error on JSON-RPC call 00:15:56.322 13:14:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:15:56.322 13:14:52 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:15:56.322 13:14:52 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:15:56.322 13:14:52 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:15:56.322 Adding namespace failed - expected result. 
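Test case1 above exercises the exclusive_write claim on a bdev: once cnode1 owns Malloc0, the second nvmf_subsystem_add_ns is expected to fail. A minimal manual reproduction with scripts/rpc.py against a freshly started nvmf_tgt (a sketch mirroring the RPCs in the trace, not the test's own code) looks like:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0      # claims Malloc0 (exclusive_write)
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0      # expected to fail: Code=-32602 Msg=Invalid parameters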
00:15:56.322 13:14:52 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:15:56.322 test case2: host connect to nvmf target in multiple paths 00:15:56.322 13:14:52 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:56.322 13:14:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.322 13:14:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:56.322 [2024-07-15 13:14:52.978420] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:56.322 13:14:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.322 13:14:52 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid=c8b8b44b-387e-43b9-a950-dc0d98528a02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:56.583 13:14:53 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid=c8b8b44b-387e-43b9-a950-dc0d98528a02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:15:56.583 13:14:53 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:15:56.583 13:14:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1194 -- # local i=0 00:15:56.583 13:14:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:15:56.584 13:14:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:15:56.584 13:14:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1201 -- # sleep 2 00:15:59.108 13:14:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:15:59.108 13:14:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:15:59.108 13:14:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:15:59.108 13:14:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:15:59.108 13:14:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:15:59.108 13:14:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # return 0 00:15:59.108 13:14:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:15:59.108 [global] 00:15:59.108 thread=1 00:15:59.108 invalidate=1 00:15:59.108 rw=write 00:15:59.108 time_based=1 00:15:59.108 runtime=1 00:15:59.108 ioengine=libaio 00:15:59.108 direct=1 00:15:59.108 bs=4096 00:15:59.108 iodepth=1 00:15:59.108 norandommap=0 00:15:59.108 numjobs=1 00:15:59.108 00:15:59.108 verify_dump=1 00:15:59.108 verify_backlog=512 00:15:59.108 verify_state_save=0 00:15:59.108 do_verify=1 00:15:59.108 verify=crc32c-intel 00:15:59.108 [job0] 00:15:59.108 filename=/dev/nvme0n1 00:15:59.108 Could not set queue depth (nvme0n1) 00:15:59.108 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:59.108 fio-3.35 00:15:59.108 Starting 1 thread 00:16:00.039 00:16:00.039 job0: (groupid=0, jobs=1): err= 0: pid=92050: Mon Jul 15 13:14:56 2024 00:16:00.039 read: IOPS=3464, BW=13.5MiB/s (14.2MB/s)(13.5MiB/1001msec) 00:16:00.039 slat (nsec): min=13082, max=46503, avg=15307.52, stdev=2123.76 00:16:00.039 clat (usec): 
min=124, max=531, avg=141.73, stdev=12.25 00:16:00.039 lat (usec): min=137, max=551, avg=157.04, stdev=12.60 00:16:00.039 clat percentiles (usec): 00:16:00.039 | 1.00th=[ 130], 5.00th=[ 133], 10.00th=[ 135], 20.00th=[ 137], 00:16:00.039 | 30.00th=[ 137], 40.00th=[ 139], 50.00th=[ 141], 60.00th=[ 143], 00:16:00.039 | 70.00th=[ 145], 80.00th=[ 147], 90.00th=[ 151], 95.00th=[ 155], 00:16:00.039 | 99.00th=[ 163], 99.50th=[ 167], 99.90th=[ 269], 99.95th=[ 449], 00:16:00.039 | 99.99th=[ 529] 00:16:00.039 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:16:00.039 slat (usec): min=19, max=105, avg=22.36, stdev= 4.59 00:16:00.039 clat (usec): min=88, max=197, avg=101.31, stdev= 7.51 00:16:00.039 lat (usec): min=109, max=303, avg=123.67, stdev= 9.42 00:16:00.039 clat percentiles (usec): 00:16:00.039 | 1.00th=[ 91], 5.00th=[ 93], 10.00th=[ 94], 20.00th=[ 96], 00:16:00.039 | 30.00th=[ 98], 40.00th=[ 99], 50.00th=[ 100], 60.00th=[ 101], 00:16:00.039 | 70.00th=[ 103], 80.00th=[ 106], 90.00th=[ 111], 95.00th=[ 115], 00:16:00.039 | 99.00th=[ 131], 99.50th=[ 141], 99.90th=[ 161], 99.95th=[ 163], 00:16:00.039 | 99.99th=[ 198] 00:16:00.039 bw ( KiB/s): min=16320, max=16320, per=100.00%, avg=16320.00, stdev= 0.00, samples=1 00:16:00.039 iops : min= 4080, max= 4080, avg=4080.00, stdev= 0.00, samples=1 00:16:00.039 lat (usec) : 100=25.75%, 250=74.19%, 500=0.04%, 750=0.01% 00:16:00.039 cpu : usr=1.80%, sys=10.70%, ctx=7052, majf=0, minf=2 00:16:00.039 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:00.039 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:00.039 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:00.039 issued rwts: total=3468,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:00.039 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:00.039 00:16:00.039 Run status group 0 (all jobs): 00:16:00.040 READ: bw=13.5MiB/s (14.2MB/s), 13.5MiB/s-13.5MiB/s (14.2MB/s-14.2MB/s), io=13.5MiB (14.2MB), run=1001-1001msec 00:16:00.040 WRITE: bw=14.0MiB/s (14.7MB/s), 14.0MiB/s-14.0MiB/s (14.7MB/s-14.7MB/s), io=14.0MiB (14.7MB), run=1001-1001msec 00:16:00.040 00:16:00.040 Disk stats (read/write): 00:16:00.040 nvme0n1: ios=3122/3269, merge=0/0, ticks=483/375, in_queue=858, util=91.58% 00:16:00.040 13:14:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:00.040 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:16:00.040 13:14:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:00.040 13:14:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1215 -- # local i=0 00:16:00.040 13:14:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:16:00.040 13:14:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:00.040 13:14:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:16:00.040 13:14:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:00.040 13:14:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # return 0 00:16:00.040 13:14:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:16:00.040 13:14:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:16:00.040 13:14:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:00.040 13:14:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 
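Test case2 above connects the same subsystem over two listeners (ports 4420 and 4421) and runs a short verified write workload through the fio wrapper. Stripped of the harness, the host-side steps are roughly the following sketch; the hostnqn/hostid values are the generated ones from the trace, and the fio command line is an approximate translation of the generated job file shown above rather than the wrapper's exact invocation:

  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02
  HOSTID=c8b8b44b-387e-43b9-a950-dc0d98528a02
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 --hostnqn=$HOSTNQN --hostid=$HOSTID
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 --hostnqn=$HOSTNQN --hostid=$HOSTID
  # 1 s time-based 4 KiB sequential writes with CRC32C verification, iodepth 1
  fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 --thread=1 \
      --rw=write --bs=4096 --iodepth=1 --numjobs=1 --runtime=1 --time_based=1 \
      --do_verify=1 --verify=crc32c-intel --verify_backlog=512
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1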
00:16:00.040 13:14:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:00.040 13:14:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:16:00.040 13:14:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:00.040 13:14:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:00.040 rmmod nvme_tcp 00:16:00.040 rmmod nvme_fabrics 00:16:00.040 rmmod nvme_keyring 00:16:00.297 13:14:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:00.297 13:14:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:16:00.297 13:14:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:16:00.297 13:14:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 91945 ']' 00:16:00.297 13:14:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 91945 00:16:00.297 13:14:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@946 -- # '[' -z 91945 ']' 00:16:00.297 13:14:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@950 -- # kill -0 91945 00:16:00.297 13:14:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # uname 00:16:00.297 13:14:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:00.297 13:14:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 91945 00:16:00.297 killing process with pid 91945 00:16:00.297 13:14:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:00.297 13:14:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:00.297 13:14:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@964 -- # echo 'killing process with pid 91945' 00:16:00.297 13:14:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@965 -- # kill 91945 00:16:00.297 13:14:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@970 -- # wait 91945 00:16:00.555 13:14:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:00.555 13:14:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:00.555 13:14:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:00.555 13:14:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:00.555 13:14:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:00.555 13:14:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:00.555 13:14:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:00.555 13:14:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:00.555 13:14:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:00.555 00:16:00.555 real 0m5.744s 00:16:00.555 user 0m19.286s 00:16:00.555 sys 0m1.389s 00:16:00.555 13:14:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:00.555 13:14:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:00.555 ************************************ 00:16:00.555 END TEST nvmf_nmic 00:16:00.555 ************************************ 00:16:00.555 13:14:57 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:00.555 13:14:57 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:00.555 13:14:57 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:00.555 13:14:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 
00:16:00.555 ************************************ 00:16:00.555 START TEST nvmf_fio_target 00:16:00.555 ************************************ 00:16:00.555 13:14:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:00.555 * Looking for test storage... 00:16:00.555 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:00.555 13:14:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:00.555 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:16:00.555 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:00.555 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:00.555 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:00.555 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:00.555 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:00.555 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:00.555 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:00.555 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:00.555 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:00.555 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:00.555 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:16:00.555 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=c8b8b44b-387e-43b9-a950-dc0d98528a02 00:16:00.555 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:00.555 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:00.555 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:00.555 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:00.555 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:00.555 13:14:57 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:00.555 13:14:57 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:00.555 13:14:57 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:00.555 13:14:57 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.555 13:14:57 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.555 13:14:57 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.555 13:14:57 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:16:00.555 13:14:57 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.555 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:16:00.555 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:00.555 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:00.555 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:00.555 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:00.555 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:00.555 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:00.555 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:00.555 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:00.556 13:14:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:00.556 13:14:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:00.556 13:14:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:00.556 13:14:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:16:00.556 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:00.556 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:00.556 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:00.556 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:00.556 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:00.556 
13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:00.556 13:14:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:00.556 13:14:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:00.556 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:00.556 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:00.556 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:00.556 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:00.556 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:00.556 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:00.556 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:00.556 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:00.556 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:00.556 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:00.556 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:00.556 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:00.556 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:00.556 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:00.556 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:00.556 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:00.556 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:00.556 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:00.556 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:00.556 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:00.556 Cannot find device "nvmf_tgt_br" 00:16:00.556 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # true 00:16:00.556 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:00.556 Cannot find device "nvmf_tgt_br2" 00:16:00.556 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # true 00:16:00.556 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:00.813 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:00.813 Cannot find device "nvmf_tgt_br" 00:16:00.813 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # true 00:16:00.813 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:00.813 Cannot find device "nvmf_tgt_br2" 00:16:00.813 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # true 00:16:00.813 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:00.813 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:00.813 13:14:57 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:00.813 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:00.813 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:16:00.813 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:00.813 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:00.813 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:16:00.813 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:00.813 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:00.813 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:00.813 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:00.813 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:00.813 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:00.813 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:00.813 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:00.813 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:00.813 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:00.813 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:00.813 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:00.813 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:00.813 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:00.813 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:00.813 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:00.813 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:00.813 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:00.813 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:00.813 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:01.071 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:01.071 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:01.071 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:01.071 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:01.071 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:01.071 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:16:01.071 00:16:01.071 --- 10.0.0.2 ping statistics --- 00:16:01.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:01.071 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:16:01.071 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:01.071 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:01.071 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.032 ms 00:16:01.071 00:16:01.071 --- 10.0.0.3 ping statistics --- 00:16:01.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:01.071 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:16:01.071 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:01.071 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:01.071 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:16:01.071 00:16:01.071 --- 10.0.0.1 ping statistics --- 00:16:01.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:01.071 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:16:01.071 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:01.071 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@433 -- # return 0 00:16:01.071 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:01.071 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:01.071 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:01.071 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:01.071 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:01.071 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:01.071 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:01.071 13:14:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:16:01.071 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:01.071 13:14:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:01.071 13:14:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.071 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=92232 00:16:01.071 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:01.071 13:14:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 92232 00:16:01.071 13:14:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@827 -- # '[' -z 92232 ']' 00:16:01.071 13:14:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:01.071 13:14:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:01.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:01.071 13:14:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:01.071 13:14:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:01.071 13:14:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.071 [2024-07-15 13:14:57.673057] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:16:01.071 [2024-07-15 13:14:57.673161] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:01.328 [2024-07-15 13:14:57.812956] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:01.328 [2024-07-15 13:14:57.906769] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:01.328 [2024-07-15 13:14:57.906823] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:01.328 [2024-07-15 13:14:57.906835] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:01.328 [2024-07-15 13:14:57.906844] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:01.328 [2024-07-15 13:14:57.906852] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:01.328 [2024-07-15 13:14:57.906964] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:01.328 [2024-07-15 13:14:57.907549] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:01.328 [2024-07-15 13:14:57.908248] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:01.328 [2024-07-15 13:14:57.908262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:02.261 13:14:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:02.261 13:14:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@860 -- # return 0 00:16:02.261 13:14:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:02.261 13:14:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:02.261 13:14:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.261 13:14:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:02.261 13:14:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:02.261 [2024-07-15 13:14:58.928960] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:02.261 13:14:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:02.828 13:14:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:16:02.828 13:14:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:03.086 13:14:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:16:03.086 13:14:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:03.343 13:14:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:16:03.343 13:14:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:03.600 
13:15:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:16:03.600 13:15:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:16:03.857 13:15:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:04.115 13:15:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:16:04.115 13:15:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:04.372 13:15:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:16:04.372 13:15:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:04.629 13:15:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:16:04.629 13:15:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:16:04.887 13:15:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:05.144 13:15:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:05.144 13:15:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:05.401 13:15:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:05.401 13:15:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:05.659 13:15:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:05.917 [2024-07-15 13:15:02.570035] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:05.917 13:15:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:16:06.175 13:15:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:16:06.431 13:15:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid=c8b8b44b-387e-43b9-a950-dc0d98528a02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:06.688 13:15:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:16:06.688 13:15:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1194 -- # local i=0 00:16:06.688 13:15:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:16:06.688 13:15:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1196 -- # [[ -n 4 ]] 00:16:06.688 13:15:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1197 -- # nvme_device_counter=4 00:16:06.688 13:15:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # sleep 2 00:16:08.585 13:15:05 
nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:16:08.585 13:15:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:16:08.585 13:15:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:16:08.585 13:15:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_devices=4 00:16:08.585 13:15:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:16:08.585 13:15:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # return 0 00:16:08.585 13:15:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:08.585 [global] 00:16:08.585 thread=1 00:16:08.585 invalidate=1 00:16:08.585 rw=write 00:16:08.585 time_based=1 00:16:08.585 runtime=1 00:16:08.585 ioengine=libaio 00:16:08.585 direct=1 00:16:08.585 bs=4096 00:16:08.585 iodepth=1 00:16:08.585 norandommap=0 00:16:08.585 numjobs=1 00:16:08.585 00:16:08.585 verify_dump=1 00:16:08.585 verify_backlog=512 00:16:08.585 verify_state_save=0 00:16:08.585 do_verify=1 00:16:08.585 verify=crc32c-intel 00:16:08.585 [job0] 00:16:08.585 filename=/dev/nvme0n1 00:16:08.585 [job1] 00:16:08.585 filename=/dev/nvme0n2 00:16:08.585 [job2] 00:16:08.585 filename=/dev/nvme0n3 00:16:08.585 [job3] 00:16:08.585 filename=/dev/nvme0n4 00:16:08.843 Could not set queue depth (nvme0n1) 00:16:08.843 Could not set queue depth (nvme0n2) 00:16:08.843 Could not set queue depth (nvme0n3) 00:16:08.843 Could not set queue depth (nvme0n4) 00:16:08.843 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:08.843 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:08.843 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:08.843 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:08.843 fio-3.35 00:16:08.843 Starting 4 threads 00:16:10.215 00:16:10.215 job0: (groupid=0, jobs=1): err= 0: pid=92531: Mon Jul 15 13:15:06 2024 00:16:10.215 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:16:10.215 slat (usec): min=14, max=332, avg=19.50, stdev= 7.68 00:16:10.215 clat (usec): min=4, max=513, avg=179.11, stdev=28.98 00:16:10.215 lat (usec): min=157, max=544, avg=198.61, stdev=28.95 00:16:10.216 clat percentiles (usec): 00:16:10.216 | 1.00th=[ 147], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 159], 00:16:10.216 | 30.00th=[ 163], 40.00th=[ 165], 50.00th=[ 172], 60.00th=[ 178], 00:16:10.216 | 70.00th=[ 188], 80.00th=[ 202], 90.00th=[ 215], 95.00th=[ 225], 00:16:10.216 | 99.00th=[ 255], 99.50th=[ 289], 99.90th=[ 506], 99.95th=[ 506], 00:16:10.216 | 99.99th=[ 515] 00:16:10.216 write: IOPS=2964, BW=11.6MiB/s (12.1MB/s)(11.6MiB/1001msec); 0 zone resets 00:16:10.216 slat (usec): min=18, max=107, avg=27.17, stdev= 6.03 00:16:10.216 clat (usec): min=101, max=709, avg=134.58, stdev=24.97 00:16:10.216 lat (usec): min=126, max=753, avg=161.75, stdev=25.43 00:16:10.216 clat percentiles (usec): 00:16:10.216 | 1.00th=[ 108], 5.00th=[ 113], 10.00th=[ 116], 20.00th=[ 121], 00:16:10.216 | 30.00th=[ 124], 40.00th=[ 127], 50.00th=[ 131], 60.00th=[ 135], 00:16:10.216 | 70.00th=[ 141], 80.00th=[ 147], 90.00th=[ 155], 95.00th=[ 163], 00:16:10.216 | 99.00th=[ 192], 99.50th=[ 245], 99.90th=[ 453], 
99.95th=[ 537], 00:16:10.216 | 99.99th=[ 709] 00:16:10.216 bw ( KiB/s): min=12288, max=12288, per=31.96%, avg=12288.00, stdev= 0.00, samples=1 00:16:10.216 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:16:10.216 lat (usec) : 10=0.02%, 100=0.02%, 250=99.13%, 500=0.74%, 750=0.09% 00:16:10.216 cpu : usr=2.60%, sys=9.40%, ctx=5528, majf=0, minf=11 00:16:10.216 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:10.216 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:10.216 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:10.216 issued rwts: total=2560,2967,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:10.216 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:10.216 job1: (groupid=0, jobs=1): err= 0: pid=92532: Mon Jul 15 13:15:06 2024 00:16:10.216 read: IOPS=1559, BW=6238KiB/s (6387kB/s)(6244KiB/1001msec) 00:16:10.216 slat (nsec): min=18002, max=70512, avg=27190.89, stdev=5857.13 00:16:10.216 clat (usec): min=153, max=4049, avg=280.04, stdev=99.81 00:16:10.216 lat (usec): min=176, max=4082, avg=307.24, stdev=99.91 00:16:10.216 clat percentiles (usec): 00:16:10.216 | 1.00th=[ 182], 5.00th=[ 251], 10.00th=[ 258], 20.00th=[ 265], 00:16:10.216 | 30.00th=[ 269], 40.00th=[ 273], 50.00th=[ 277], 60.00th=[ 281], 00:16:10.216 | 70.00th=[ 285], 80.00th=[ 293], 90.00th=[ 302], 95.00th=[ 310], 00:16:10.216 | 99.00th=[ 343], 99.50th=[ 396], 99.90th=[ 914], 99.95th=[ 4047], 00:16:10.216 | 99.99th=[ 4047] 00:16:10.216 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:16:10.216 slat (usec): min=19, max=109, avg=35.88, stdev= 7.55 00:16:10.216 clat (usec): min=113, max=7197, avg=213.18, stdev=160.46 00:16:10.216 lat (usec): min=149, max=7245, avg=249.07, stdev=160.49 00:16:10.216 clat percentiles (usec): 00:16:10.216 | 1.00th=[ 163], 5.00th=[ 178], 10.00th=[ 184], 20.00th=[ 190], 00:16:10.216 | 30.00th=[ 194], 40.00th=[ 198], 50.00th=[ 204], 60.00th=[ 210], 00:16:10.216 | 70.00th=[ 217], 80.00th=[ 225], 90.00th=[ 245], 95.00th=[ 269], 00:16:10.216 | 99.00th=[ 310], 99.50th=[ 322], 99.90th=[ 1205], 99.95th=[ 1336], 00:16:10.216 | 99.99th=[ 7177] 00:16:10.216 bw ( KiB/s): min= 8192, max= 8192, per=21.30%, avg=8192.00, stdev= 0.00, samples=1 00:16:10.216 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:16:10.216 lat (usec) : 250=53.62%, 500=46.19%, 750=0.06%, 1000=0.03% 00:16:10.216 lat (msec) : 2=0.06%, 10=0.06% 00:16:10.216 cpu : usr=1.70%, sys=9.40%, ctx=3627, majf=0, minf=10 00:16:10.216 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:10.216 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:10.216 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:10.216 issued rwts: total=1561,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:10.216 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:10.216 job2: (groupid=0, jobs=1): err= 0: pid=92533: Mon Jul 15 13:15:06 2024 00:16:10.216 read: IOPS=2503, BW=9.78MiB/s (10.3MB/s)(9.79MiB/1001msec) 00:16:10.216 slat (nsec): min=13067, max=41585, avg=16700.92, stdev=2952.25 00:16:10.216 clat (usec): min=149, max=1671, avg=202.49, stdev=43.06 00:16:10.216 lat (usec): min=165, max=1686, avg=219.19, stdev=42.84 00:16:10.216 clat percentiles (usec): 00:16:10.216 | 1.00th=[ 161], 5.00th=[ 167], 10.00th=[ 169], 20.00th=[ 174], 00:16:10.216 | 30.00th=[ 178], 40.00th=[ 182], 50.00th=[ 188], 60.00th=[ 206], 00:16:10.216 | 
70.00th=[ 227], 80.00th=[ 235], 90.00th=[ 245], 95.00th=[ 253], 00:16:10.216 | 99.00th=[ 265], 99.50th=[ 273], 99.90th=[ 404], 99.95th=[ 553], 00:16:10.216 | 99.99th=[ 1680] 00:16:10.216 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:16:10.216 slat (usec): min=18, max=104, avg=23.55, stdev= 4.66 00:16:10.216 clat (usec): min=109, max=2044, avg=149.03, stdev=46.42 00:16:10.216 lat (usec): min=133, max=2068, avg=172.58, stdev=46.68 00:16:10.216 clat percentiles (usec): 00:16:10.216 | 1.00th=[ 119], 5.00th=[ 123], 10.00th=[ 127], 20.00th=[ 131], 00:16:10.216 | 30.00th=[ 135], 40.00th=[ 139], 50.00th=[ 143], 60.00th=[ 147], 00:16:10.216 | 70.00th=[ 159], 80.00th=[ 167], 90.00th=[ 178], 95.00th=[ 186], 00:16:10.216 | 99.00th=[ 202], 99.50th=[ 210], 99.90th=[ 717], 99.95th=[ 848], 00:16:10.216 | 99.99th=[ 2040] 00:16:10.216 bw ( KiB/s): min=12288, max=12288, per=31.96%, avg=12288.00, stdev= 0.00, samples=1 00:16:10.216 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:16:10.216 lat (usec) : 250=96.84%, 500=3.06%, 750=0.04%, 1000=0.02% 00:16:10.216 lat (msec) : 2=0.02%, 4=0.02% 00:16:10.216 cpu : usr=2.00%, sys=7.50%, ctx=5068, majf=0, minf=3 00:16:10.216 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:10.216 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:10.216 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:10.216 issued rwts: total=2506,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:10.216 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:10.216 job3: (groupid=0, jobs=1): err= 0: pid=92534: Mon Jul 15 13:15:06 2024 00:16:10.216 read: IOPS=1612, BW=6450KiB/s (6604kB/s)(6456KiB/1001msec) 00:16:10.216 slat (usec): min=16, max=221, avg=26.20, stdev= 8.02 00:16:10.216 clat (usec): min=154, max=2859, avg=279.85, stdev=71.30 00:16:10.216 lat (usec): min=173, max=2889, avg=306.05, stdev=71.42 00:16:10.216 clat percentiles (usec): 00:16:10.216 | 1.00th=[ 172], 5.00th=[ 249], 10.00th=[ 255], 20.00th=[ 262], 00:16:10.216 | 30.00th=[ 269], 40.00th=[ 273], 50.00th=[ 277], 60.00th=[ 285], 00:16:10.216 | 70.00th=[ 289], 80.00th=[ 293], 90.00th=[ 302], 95.00th=[ 310], 00:16:10.216 | 99.00th=[ 343], 99.50th=[ 388], 99.90th=[ 824], 99.95th=[ 2868], 00:16:10.216 | 99.99th=[ 2868] 00:16:10.216 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:16:10.216 slat (usec): min=24, max=119, avg=35.90, stdev= 7.21 00:16:10.216 clat (usec): min=118, max=345, avg=206.01, stdev=27.30 00:16:10.216 lat (usec): min=155, max=423, avg=241.91, stdev=26.18 00:16:10.216 clat percentiles (usec): 00:16:10.216 | 1.00th=[ 133], 5.00th=[ 176], 10.00th=[ 182], 20.00th=[ 188], 00:16:10.216 | 30.00th=[ 192], 40.00th=[ 198], 50.00th=[ 202], 60.00th=[ 208], 00:16:10.216 | 70.00th=[ 217], 80.00th=[ 223], 90.00th=[ 237], 95.00th=[ 258], 00:16:10.216 | 99.00th=[ 293], 99.50th=[ 302], 99.90th=[ 318], 99.95th=[ 322], 00:16:10.216 | 99.99th=[ 347] 00:16:10.216 bw ( KiB/s): min= 8192, max= 8192, per=21.30%, avg=8192.00, stdev= 0.00, samples=1 00:16:10.216 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:16:10.216 lat (usec) : 250=54.89%, 500=44.98%, 750=0.08%, 1000=0.03% 00:16:10.216 lat (msec) : 4=0.03% 00:16:10.216 cpu : usr=2.30%, sys=9.00%, ctx=3686, majf=0, minf=11 00:16:10.216 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:10.216 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:10.216 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:10.216 issued rwts: total=1614,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:10.216 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:10.216 00:16:10.216 Run status group 0 (all jobs): 00:16:10.216 READ: bw=32.2MiB/s (33.7MB/s), 6238KiB/s-9.99MiB/s (6387kB/s-10.5MB/s), io=32.2MiB (33.8MB), run=1001-1001msec 00:16:10.216 WRITE: bw=37.6MiB/s (39.4MB/s), 8184KiB/s-11.6MiB/s (8380kB/s-12.1MB/s), io=37.6MiB (39.4MB), run=1001-1001msec 00:16:10.216 00:16:10.216 Disk stats (read/write): 00:16:10.216 nvme0n1: ios=2300/2560, merge=0/0, ticks=419/366, in_queue=785, util=87.68% 00:16:10.216 nvme0n2: ios=1555/1536, merge=0/0, ticks=461/343, in_queue=804, util=87.85% 00:16:10.216 nvme0n3: ios=2048/2463, merge=0/0, ticks=408/396, in_queue=804, util=89.26% 00:16:10.216 nvme0n4: ios=1536/1571, merge=0/0, ticks=432/351, in_queue=783, util=89.60% 00:16:10.216 13:15:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:16:10.216 [global] 00:16:10.216 thread=1 00:16:10.216 invalidate=1 00:16:10.216 rw=randwrite 00:16:10.216 time_based=1 00:16:10.216 runtime=1 00:16:10.216 ioengine=libaio 00:16:10.216 direct=1 00:16:10.216 bs=4096 00:16:10.216 iodepth=1 00:16:10.216 norandommap=0 00:16:10.216 numjobs=1 00:16:10.216 00:16:10.216 verify_dump=1 00:16:10.216 verify_backlog=512 00:16:10.216 verify_state_save=0 00:16:10.216 do_verify=1 00:16:10.216 verify=crc32c-intel 00:16:10.216 [job0] 00:16:10.216 filename=/dev/nvme0n1 00:16:10.216 [job1] 00:16:10.216 filename=/dev/nvme0n2 00:16:10.216 [job2] 00:16:10.216 filename=/dev/nvme0n3 00:16:10.216 [job3] 00:16:10.216 filename=/dev/nvme0n4 00:16:10.216 Could not set queue depth (nvme0n1) 00:16:10.216 Could not set queue depth (nvme0n2) 00:16:10.216 Could not set queue depth (nvme0n3) 00:16:10.216 Could not set queue depth (nvme0n4) 00:16:10.216 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:10.216 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:10.216 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:10.216 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:10.216 fio-3.35 00:16:10.216 Starting 4 threads 00:16:11.587 00:16:11.587 job0: (groupid=0, jobs=1): err= 0: pid=92587: Mon Jul 15 13:15:07 2024 00:16:11.587 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:16:11.587 slat (nsec): min=19249, max=87695, avg=38004.20, stdev=11066.69 00:16:11.587 clat (usec): min=241, max=1162, avg=478.21, stdev=69.65 00:16:11.587 lat (usec): min=279, max=1201, avg=516.21, stdev=72.65 00:16:11.587 clat percentiles (usec): 00:16:11.587 | 1.00th=[ 375], 5.00th=[ 404], 10.00th=[ 420], 20.00th=[ 433], 00:16:11.587 | 30.00th=[ 445], 40.00th=[ 457], 50.00th=[ 469], 60.00th=[ 478], 00:16:11.587 | 70.00th=[ 494], 80.00th=[ 510], 90.00th=[ 545], 95.00th=[ 578], 00:16:11.587 | 99.00th=[ 742], 99.50th=[ 857], 99.90th=[ 1074], 99.95th=[ 1156], 00:16:11.587 | 99.99th=[ 1156] 00:16:11.587 write: IOPS=1123, BW=4496KiB/s (4603kB/s)(4500KiB/1001msec); 0 zone resets 00:16:11.587 slat (usec): min=23, max=121, avg=47.58, stdev=10.34 00:16:11.587 clat (usec): min=172, max=1297, avg=363.20, stdev=71.68 00:16:11.587 lat (usec): min=228, max=1363, avg=410.78, 
stdev=71.12 00:16:11.587 clat percentiles (usec): 00:16:11.587 | 1.00th=[ 215], 5.00th=[ 269], 10.00th=[ 285], 20.00th=[ 314], 00:16:11.587 | 30.00th=[ 338], 40.00th=[ 359], 50.00th=[ 367], 60.00th=[ 375], 00:16:11.587 | 70.00th=[ 388], 80.00th=[ 400], 90.00th=[ 420], 95.00th=[ 441], 00:16:11.587 | 99.00th=[ 570], 99.50th=[ 758], 99.90th=[ 996], 99.95th=[ 1303], 00:16:11.587 | 99.99th=[ 1303] 00:16:11.587 bw ( KiB/s): min= 4096, max= 4096, per=14.06%, avg=4096.00, stdev= 0.00, samples=1 00:16:11.587 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:11.587 lat (usec) : 250=1.86%, 500=85.44%, 750=12.01%, 1000=0.56% 00:16:11.587 lat (msec) : 2=0.14% 00:16:11.587 cpu : usr=2.00%, sys=7.00%, ctx=2150, majf=0, minf=15 00:16:11.587 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:11.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.587 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.588 issued rwts: total=1024,1125,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.588 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:11.588 job1: (groupid=0, jobs=1): err= 0: pid=92588: Mon Jul 15 13:15:07 2024 00:16:11.588 read: IOPS=2186, BW=8747KiB/s (8957kB/s)(8756KiB/1001msec) 00:16:11.588 slat (nsec): min=13808, max=53826, avg=17244.69, stdev=3775.51 00:16:11.588 clat (usec): min=138, max=2029, avg=211.11, stdev=47.54 00:16:11.588 lat (usec): min=153, max=2048, avg=228.35, stdev=48.06 00:16:11.588 clat percentiles (usec): 00:16:11.588 | 1.00th=[ 161], 5.00th=[ 167], 10.00th=[ 174], 20.00th=[ 184], 00:16:11.588 | 30.00th=[ 194], 40.00th=[ 202], 50.00th=[ 210], 60.00th=[ 219], 00:16:11.588 | 70.00th=[ 227], 80.00th=[ 235], 90.00th=[ 247], 95.00th=[ 255], 00:16:11.588 | 99.00th=[ 273], 99.50th=[ 289], 99.90th=[ 306], 99.95th=[ 318], 00:16:11.588 | 99.99th=[ 2024] 00:16:11.588 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:16:11.588 slat (usec): min=19, max=109, avg=25.74, stdev= 6.78 00:16:11.588 clat (usec): min=103, max=629, avg=166.06, stdev=28.89 00:16:11.588 lat (usec): min=125, max=665, avg=191.80, stdev=30.71 00:16:11.588 clat percentiles (usec): 00:16:11.588 | 1.00th=[ 114], 5.00th=[ 123], 10.00th=[ 130], 20.00th=[ 141], 00:16:11.588 | 30.00th=[ 151], 40.00th=[ 159], 50.00th=[ 165], 60.00th=[ 174], 00:16:11.588 | 70.00th=[ 182], 80.00th=[ 190], 90.00th=[ 202], 95.00th=[ 212], 00:16:11.588 | 99.00th=[ 229], 99.50th=[ 241], 99.90th=[ 281], 99.95th=[ 334], 00:16:11.588 | 99.99th=[ 627] 00:16:11.588 bw ( KiB/s): min=10184, max=10184, per=34.96%, avg=10184.00, stdev= 0.00, samples=1 00:16:11.588 iops : min= 2546, max= 2546, avg=2546.00, stdev= 0.00, samples=1 00:16:11.588 lat (usec) : 250=96.50%, 500=3.45%, 750=0.02% 00:16:11.588 lat (msec) : 4=0.02% 00:16:11.588 cpu : usr=1.90%, sys=7.80%, ctx=4749, majf=0, minf=5 00:16:11.588 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:11.588 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.588 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.588 issued rwts: total=2189,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.588 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:11.588 job2: (groupid=0, jobs=1): err= 0: pid=92589: Mon Jul 15 13:15:07 2024 00:16:11.588 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:16:11.588 slat (nsec): min=20117, max=99069, avg=39226.03, stdev=12066.67 
00:16:11.588 clat (usec): min=269, max=1441, avg=469.21, stdev=78.20 00:16:11.588 lat (usec): min=293, max=1471, avg=508.44, stdev=77.77 00:16:11.588 clat percentiles (usec): 00:16:11.588 | 1.00th=[ 314], 5.00th=[ 388], 10.00th=[ 404], 20.00th=[ 420], 00:16:11.588 | 30.00th=[ 437], 40.00th=[ 449], 50.00th=[ 461], 60.00th=[ 474], 00:16:11.588 | 70.00th=[ 490], 80.00th=[ 510], 90.00th=[ 537], 95.00th=[ 562], 00:16:11.588 | 99.00th=[ 660], 99.50th=[ 955], 99.90th=[ 1188], 99.95th=[ 1434], 00:16:11.588 | 99.99th=[ 1434] 00:16:11.588 write: IOPS=1131, BW=4527KiB/s (4636kB/s)(4532KiB/1001msec); 0 zone resets 00:16:11.588 slat (usec): min=27, max=129, avg=48.54, stdev=11.49 00:16:11.588 clat (usec): min=202, max=3233, avg=366.29, stdev=119.98 00:16:11.588 lat (usec): min=245, max=3282, avg=414.82, stdev=119.42 00:16:11.588 clat percentiles (usec): 00:16:11.588 | 1.00th=[ 233], 5.00th=[ 269], 10.00th=[ 285], 20.00th=[ 306], 00:16:11.588 | 30.00th=[ 334], 40.00th=[ 355], 50.00th=[ 367], 60.00th=[ 375], 00:16:11.588 | 70.00th=[ 388], 80.00th=[ 404], 90.00th=[ 424], 95.00th=[ 457], 00:16:11.588 | 99.00th=[ 586], 99.50th=[ 807], 99.90th=[ 1795], 99.95th=[ 3228], 00:16:11.588 | 99.99th=[ 3228] 00:16:11.588 bw ( KiB/s): min= 4096, max= 4096, per=14.06%, avg=4096.00, stdev= 0.00, samples=1 00:16:11.588 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:11.588 lat (usec) : 250=1.25%, 500=86.23%, 750=11.82%, 1000=0.28% 00:16:11.588 lat (msec) : 2=0.37%, 4=0.05% 00:16:11.588 cpu : usr=1.80%, sys=7.40%, ctx=2158, majf=0, minf=15 00:16:11.588 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:11.588 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.588 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.588 issued rwts: total=1024,1133,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.588 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:11.588 job3: (groupid=0, jobs=1): err= 0: pid=92590: Mon Jul 15 13:15:07 2024 00:16:11.588 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:16:11.588 slat (usec): min=12, max=106, avg=17.27, stdev= 4.64 00:16:11.588 clat (usec): min=151, max=1785, avg=223.04, stdev=44.24 00:16:11.588 lat (usec): min=171, max=1802, avg=240.31, stdev=44.47 00:16:11.588 clat percentiles (usec): 00:16:11.588 | 1.00th=[ 174], 5.00th=[ 182], 10.00th=[ 188], 20.00th=[ 198], 00:16:11.588 | 30.00th=[ 206], 40.00th=[ 215], 50.00th=[ 223], 60.00th=[ 229], 00:16:11.588 | 70.00th=[ 235], 80.00th=[ 243], 90.00th=[ 258], 95.00th=[ 269], 00:16:11.588 | 99.00th=[ 306], 99.50th=[ 318], 99.90th=[ 367], 99.95th=[ 379], 00:16:11.588 | 99.99th=[ 1795] 00:16:11.588 write: IOPS=2469, BW=9878KiB/s (10.1MB/s)(9888KiB/1001msec); 0 zone resets 00:16:11.588 slat (usec): min=19, max=111, avg=25.11, stdev= 6.89 00:16:11.588 clat (usec): min=114, max=897, avg=176.79, stdev=32.64 00:16:11.588 lat (usec): min=134, max=950, avg=201.90, stdev=34.53 00:16:11.588 clat percentiles (usec): 00:16:11.588 | 1.00th=[ 125], 5.00th=[ 137], 10.00th=[ 145], 20.00th=[ 153], 00:16:11.588 | 30.00th=[ 161], 40.00th=[ 169], 50.00th=[ 176], 60.00th=[ 182], 00:16:11.588 | 70.00th=[ 188], 80.00th=[ 198], 90.00th=[ 210], 95.00th=[ 219], 00:16:11.588 | 99.00th=[ 255], 99.50th=[ 273], 99.90th=[ 478], 99.95th=[ 660], 00:16:11.588 | 99.99th=[ 898] 00:16:11.588 bw ( KiB/s): min= 9376, max= 9376, per=32.19%, avg=9376.00, stdev= 0.00, samples=1 00:16:11.588 iops : min= 2344, max= 2344, avg=2344.00, stdev= 0.00, samples=1 
00:16:11.588 lat (usec) : 250=93.14%, 500=6.79%, 750=0.02%, 1000=0.02% 00:16:11.588 lat (msec) : 2=0.02% 00:16:11.588 cpu : usr=2.20%, sys=6.90%, ctx=4524, majf=0, minf=10 00:16:11.588 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:11.588 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.588 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.588 issued rwts: total=2048,2472,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.588 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:11.588 00:16:11.588 Run status group 0 (all jobs): 00:16:11.588 READ: bw=24.5MiB/s (25.7MB/s), 4092KiB/s-8747KiB/s (4190kB/s-8957kB/s), io=24.6MiB (25.7MB), run=1001-1001msec 00:16:11.588 WRITE: bw=28.4MiB/s (29.8MB/s), 4496KiB/s-9.99MiB/s (4603kB/s-10.5MB/s), io=28.5MiB (29.9MB), run=1001-1001msec 00:16:11.588 00:16:11.588 Disk stats (read/write): 00:16:11.588 nvme0n1: ios=883/1024, merge=0/0, ticks=445/396, in_queue=841, util=88.98% 00:16:11.588 nvme0n2: ios=2021/2048, merge=0/0, ticks=454/370, in_queue=824, util=88.95% 00:16:11.588 nvme0n3: ios=838/1024, merge=0/0, ticks=407/395, in_queue=802, util=89.29% 00:16:11.588 nvme0n4: ios=1839/2048, merge=0/0, ticks=416/387, in_queue=803, util=89.75% 00:16:11.588 13:15:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:16:11.588 [global] 00:16:11.588 thread=1 00:16:11.588 invalidate=1 00:16:11.588 rw=write 00:16:11.588 time_based=1 00:16:11.588 runtime=1 00:16:11.588 ioengine=libaio 00:16:11.588 direct=1 00:16:11.588 bs=4096 00:16:11.588 iodepth=128 00:16:11.588 norandommap=0 00:16:11.588 numjobs=1 00:16:11.588 00:16:11.588 verify_dump=1 00:16:11.588 verify_backlog=512 00:16:11.588 verify_state_save=0 00:16:11.588 do_verify=1 00:16:11.588 verify=crc32c-intel 00:16:11.588 [job0] 00:16:11.588 filename=/dev/nvme0n1 00:16:11.588 [job1] 00:16:11.588 filename=/dev/nvme0n2 00:16:11.588 [job2] 00:16:11.588 filename=/dev/nvme0n3 00:16:11.588 [job3] 00:16:11.588 filename=/dev/nvme0n4 00:16:11.588 Could not set queue depth (nvme0n1) 00:16:11.588 Could not set queue depth (nvme0n2) 00:16:11.588 Could not set queue depth (nvme0n3) 00:16:11.588 Could not set queue depth (nvme0n4) 00:16:11.588 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:11.588 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:11.588 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:11.588 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:11.588 fio-3.35 00:16:11.588 Starting 4 threads 00:16:12.959 00:16:12.959 job0: (groupid=0, jobs=1): err= 0: pid=92650: Mon Jul 15 13:15:09 2024 00:16:12.959 read: IOPS=1526, BW=6107KiB/s (6254kB/s)(6144KiB/1006msec) 00:16:12.959 slat (usec): min=6, max=18688, avg=266.06, stdev=1453.42 00:16:12.959 clat (usec): min=20906, max=55063, avg=32501.24, stdev=4966.30 00:16:12.959 lat (usec): min=20928, max=55081, avg=32767.30, stdev=5143.24 00:16:12.959 clat percentiles (usec): 00:16:12.959 | 1.00th=[25560], 5.00th=[27919], 10.00th=[29230], 20.00th=[29492], 00:16:12.959 | 30.00th=[30016], 40.00th=[30278], 50.00th=[30802], 60.00th=[31327], 00:16:12.959 | 70.00th=[32375], 80.00th=[35390], 90.00th=[38536], 95.00th=[42730], 00:16:12.959 | 99.00th=[53216], 
99.50th=[53216], 99.90th=[55313], 99.95th=[55313], 00:16:12.959 | 99.99th=[55313] 00:16:12.959 write: IOPS=1843, BW=7376KiB/s (7553kB/s)(7420KiB/1006msec); 0 zone resets 00:16:12.959 slat (usec): min=13, max=13922, avg=310.10, stdev=1243.75 00:16:12.959 clat (usec): min=4460, max=66824, avg=41400.68, stdev=12149.77 00:16:12.959 lat (usec): min=5813, max=66844, avg=41710.78, stdev=12219.72 00:16:12.959 clat percentiles (usec): 00:16:12.959 | 1.00th=[13566], 5.00th=[22414], 10.00th=[24511], 20.00th=[25297], 00:16:12.959 | 30.00th=[29230], 40.00th=[44827], 50.00th=[46924], 60.00th=[48497], 00:16:12.959 | 70.00th=[50070], 80.00th=[51119], 90.00th=[53216], 95.00th=[55313], 00:16:12.959 | 99.00th=[60556], 99.50th=[65274], 99.90th=[66847], 99.95th=[66847], 00:16:12.959 | 99.99th=[66847] 00:16:12.959 bw ( KiB/s): min= 5624, max= 8208, per=14.57%, avg=6916.00, stdev=1827.16, samples=2 00:16:12.959 iops : min= 1406, max= 2052, avg=1729.00, stdev=456.79, samples=2 00:16:12.959 lat (msec) : 10=0.27%, 20=0.94%, 50=81.66%, 100=17.13% 00:16:12.959 cpu : usr=2.19%, sys=5.87%, ctx=220, majf=0, minf=7 00:16:12.959 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.1% 00:16:12.959 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:12.959 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:12.959 issued rwts: total=1536,1855,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:12.959 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:12.959 job1: (groupid=0, jobs=1): err= 0: pid=92651: Mon Jul 15 13:15:09 2024 00:16:12.959 read: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec) 00:16:12.959 slat (usec): min=9, max=4915, avg=114.91, stdev=597.78 00:16:12.959 clat (usec): min=10535, max=20352, avg=15012.42, stdev=1448.25 00:16:12.959 lat (usec): min=10556, max=20391, avg=15127.33, stdev=1508.61 00:16:12.959 clat percentiles (usec): 00:16:12.959 | 1.00th=[11338], 5.00th=[12649], 10.00th=[13435], 20.00th=[14091], 00:16:12.959 | 30.00th=[14353], 40.00th=[14615], 50.00th=[15008], 60.00th=[15270], 00:16:12.959 | 70.00th=[15664], 80.00th=[15926], 90.00th=[16581], 95.00th=[17433], 00:16:12.959 | 99.00th=[19530], 99.50th=[20055], 99.90th=[20055], 99.95th=[20317], 00:16:12.959 | 99.99th=[20317] 00:16:12.959 write: IOPS=4535, BW=17.7MiB/s (18.6MB/s)(17.8MiB/1003msec); 0 zone resets 00:16:12.959 slat (usec): min=9, max=5062, avg=108.58, stdev=492.08 00:16:12.959 clat (usec): min=623, max=20317, avg=14324.01, stdev=1795.46 00:16:12.959 lat (usec): min=4124, max=20336, avg=14432.59, stdev=1788.14 00:16:12.959 clat percentiles (usec): 00:16:12.959 | 1.00th=[ 8717], 5.00th=[10945], 10.00th=[12125], 20.00th=[13698], 00:16:12.959 | 30.00th=[14091], 40.00th=[14353], 50.00th=[14484], 60.00th=[14746], 00:16:12.959 | 70.00th=[15008], 80.00th=[15401], 90.00th=[15795], 95.00th=[16581], 00:16:12.959 | 99.00th=[18220], 99.50th=[19006], 99.90th=[20055], 99.95th=[20055], 00:16:12.959 | 99.99th=[20317] 00:16:12.960 bw ( KiB/s): min=17464, max=17904, per=37.26%, avg=17684.00, stdev=311.13, samples=2 00:16:12.960 iops : min= 4366, max= 4476, avg=4421.00, stdev=77.78, samples=2 00:16:12.960 lat (usec) : 750=0.01% 00:16:12.960 lat (msec) : 10=1.11%, 20=98.58%, 50=0.30% 00:16:12.960 cpu : usr=4.19%, sys=12.97%, ctx=379, majf=0, minf=6 00:16:12.960 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:16:12.960 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:12.960 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.1% 00:16:12.960 issued rwts: total=4096,4549,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:12.960 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:12.960 job2: (groupid=0, jobs=1): err= 0: pid=92652: Mon Jul 15 13:15:09 2024 00:16:12.960 read: IOPS=1292, BW=5172KiB/s (5296kB/s)(5208KiB/1007msec) 00:16:12.960 slat (usec): min=7, max=9517, avg=288.45, stdev=1437.49 00:16:12.960 clat (usec): min=4458, max=45972, avg=35845.72, stdev=4913.68 00:16:12.960 lat (usec): min=13101, max=45988, avg=36134.17, stdev=4749.00 00:16:12.960 clat percentiles (usec): 00:16:12.960 | 1.00th=[13435], 5.00th=[26870], 10.00th=[32900], 20.00th=[35390], 00:16:12.960 | 30.00th=[35914], 40.00th=[35914], 50.00th=[36439], 60.00th=[36963], 00:16:12.960 | 70.00th=[38536], 80.00th=[38536], 90.00th=[39584], 95.00th=[39584], 00:16:12.960 | 99.00th=[42206], 99.50th=[45876], 99.90th=[45876], 99.95th=[45876], 00:16:12.960 | 99.99th=[45876] 00:16:12.960 write: IOPS=1525, BW=6101KiB/s (6248kB/s)(6144KiB/1007msec); 0 zone resets 00:16:12.960 slat (usec): min=18, max=24476, avg=398.26, stdev=2209.83 00:16:12.960 clat (usec): min=24791, max=88401, avg=50384.48, stdev=13305.51 00:16:12.960 lat (usec): min=27121, max=88480, avg=50782.74, stdev=13238.20 00:16:12.960 clat percentiles (usec): 00:16:12.960 | 1.00th=[31589], 5.00th=[32375], 10.00th=[33817], 20.00th=[36963], 00:16:12.960 | 30.00th=[40633], 40.00th=[44827], 50.00th=[51643], 60.00th=[55313], 00:16:12.960 | 70.00th=[58983], 80.00th=[61080], 90.00th=[65799], 95.00th=[78119], 00:16:12.960 | 99.00th=[88605], 99.50th=[88605], 99.90th=[88605], 99.95th=[88605], 00:16:12.960 | 99.99th=[88605] 00:16:12.960 bw ( KiB/s): min= 5402, max= 6896, per=12.96%, avg=6149.00, stdev=1056.42, samples=2 00:16:12.960 iops : min= 1350, max= 1724, avg=1537.00, stdev=264.46, samples=2 00:16:12.960 lat (msec) : 10=0.04%, 20=1.13%, 50=71.32%, 100=27.52% 00:16:12.960 cpu : usr=1.29%, sys=5.17%, ctx=100, majf=0, minf=13 00:16:12.960 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8% 00:16:12.960 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:12.960 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:12.960 issued rwts: total=1302,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:12.960 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:12.960 job3: (groupid=0, jobs=1): err= 0: pid=92653: Mon Jul 15 13:15:09 2024 00:16:12.960 read: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec) 00:16:12.960 slat (usec): min=6, max=4461, avg=128.35, stdev=626.16 00:16:12.960 clat (usec): min=12566, max=19532, avg=17099.93, stdev=1033.61 00:16:12.960 lat (usec): min=12942, max=22591, avg=17228.28, stdev=866.28 00:16:12.960 clat percentiles (usec): 00:16:12.960 | 1.00th=[13435], 5.00th=[14746], 10.00th=[16188], 20.00th=[16712], 00:16:12.960 | 30.00th=[16909], 40.00th=[17171], 50.00th=[17171], 60.00th=[17433], 00:16:12.960 | 70.00th=[17433], 80.00th=[17695], 90.00th=[18220], 95.00th=[18482], 00:16:12.960 | 99.00th=[19006], 99.50th=[19268], 99.90th=[19530], 99.95th=[19530], 00:16:12.960 | 99.99th=[19530] 00:16:12.960 write: IOPS=3996, BW=15.6MiB/s (16.4MB/s)(15.7MiB/1003msec); 0 zone resets 00:16:12.960 slat (usec): min=12, max=4372, avg=127.10, stdev=581.54 00:16:12.960 clat (usec): min=405, max=20102, avg=16252.99, stdev=2229.89 00:16:12.960 lat (usec): min=3734, max=20128, avg=16380.10, stdev=2219.83 00:16:12.960 clat percentiles (usec): 00:16:12.960 | 1.00th=[ 8586], 5.00th=[13698], 
10.00th=[14091], 20.00th=[14484], 00:16:12.960 | 30.00th=[14877], 40.00th=[15270], 50.00th=[16712], 60.00th=[17433], 00:16:12.960 | 70.00th=[17695], 80.00th=[18220], 90.00th=[18744], 95.00th=[19006], 00:16:12.960 | 99.00th=[20055], 99.50th=[20055], 99.90th=[20055], 99.95th=[20055], 00:16:12.960 | 99.99th=[20055] 00:16:12.960 bw ( KiB/s): min=14656, max=16416, per=32.74%, avg=15536.00, stdev=1244.51, samples=2 00:16:12.960 iops : min= 3664, max= 4104, avg=3884.00, stdev=311.13, samples=2 00:16:12.960 lat (usec) : 500=0.01% 00:16:12.960 lat (msec) : 4=0.17%, 10=0.67%, 20=98.83%, 50=0.32% 00:16:12.960 cpu : usr=4.69%, sys=10.58%, ctx=357, majf=0, minf=5 00:16:12.960 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:16:12.960 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:12.960 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:12.960 issued rwts: total=3584,4008,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:12.960 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:12.960 00:16:12.960 Run status group 0 (all jobs): 00:16:12.960 READ: bw=40.8MiB/s (42.8MB/s), 5172KiB/s-16.0MiB/s (5296kB/s-16.7MB/s), io=41.1MiB (43.1MB), run=1003-1007msec 00:16:12.960 WRITE: bw=46.3MiB/s (48.6MB/s), 6101KiB/s-17.7MiB/s (6248kB/s-18.6MB/s), io=46.7MiB (48.9MB), run=1003-1007msec 00:16:12.960 00:16:12.960 Disk stats (read/write): 00:16:12.960 nvme0n1: ios=1387/1536, merge=0/0, ticks=21505/30317, in_queue=51822, util=86.93% 00:16:12.960 nvme0n2: ios=3624/3756, merge=0/0, ticks=16309/15708, in_queue=32017, util=87.96% 00:16:12.960 nvme0n3: ios=1024/1313, merge=0/0, ticks=9116/16414, in_queue=25530, util=88.95% 00:16:12.960 nvme0n4: ios=3072/3374, merge=0/0, ticks=12269/12372, in_queue=24641, util=89.51% 00:16:12.960 13:15:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:16:12.960 [global] 00:16:12.960 thread=1 00:16:12.960 invalidate=1 00:16:12.960 rw=randwrite 00:16:12.960 time_based=1 00:16:12.960 runtime=1 00:16:12.960 ioengine=libaio 00:16:12.960 direct=1 00:16:12.960 bs=4096 00:16:12.960 iodepth=128 00:16:12.960 norandommap=0 00:16:12.960 numjobs=1 00:16:12.960 00:16:12.960 verify_dump=1 00:16:12.960 verify_backlog=512 00:16:12.960 verify_state_save=0 00:16:12.960 do_verify=1 00:16:12.960 verify=crc32c-intel 00:16:12.960 [job0] 00:16:12.960 filename=/dev/nvme0n1 00:16:12.960 [job1] 00:16:12.960 filename=/dev/nvme0n2 00:16:12.960 [job2] 00:16:12.960 filename=/dev/nvme0n3 00:16:12.960 [job3] 00:16:12.960 filename=/dev/nvme0n4 00:16:12.960 Could not set queue depth (nvme0n1) 00:16:12.960 Could not set queue depth (nvme0n2) 00:16:12.960 Could not set queue depth (nvme0n3) 00:16:12.960 Could not set queue depth (nvme0n4) 00:16:12.960 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:12.960 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:12.960 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:12.960 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:12.960 fio-3.35 00:16:12.960 Starting 4 threads 00:16:14.346 00:16:14.346 job0: (groupid=0, jobs=1): err= 0: pid=92706: Mon Jul 15 13:15:10 2024 00:16:14.346 read: IOPS=2029, BW=8119KiB/s (8314kB/s)(8192KiB/1009msec) 
00:16:14.346 slat (usec): min=3, max=11368, avg=237.07, stdev=1002.79 00:16:14.346 clat (usec): min=12786, max=43120, avg=30327.20, stdev=7213.07 00:16:14.346 lat (usec): min=13963, max=43192, avg=30564.28, stdev=7240.00 00:16:14.346 clat percentiles (usec): 00:16:14.347 | 1.00th=[13960], 5.00th=[16450], 10.00th=[16909], 20.00th=[25035], 00:16:14.347 | 30.00th=[28705], 40.00th=[31065], 50.00th=[33162], 60.00th=[33817], 00:16:14.347 | 70.00th=[34866], 80.00th=[35390], 90.00th=[37487], 95.00th=[39060], 00:16:14.347 | 99.00th=[41681], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:16:14.347 | 99.99th=[43254] 00:16:14.347 write: IOPS=2343, BW=9376KiB/s (9601kB/s)(9460KiB/1009msec); 0 zone resets 00:16:14.347 slat (usec): min=11, max=8266, avg=209.99, stdev=835.06 00:16:14.347 clat (usec): min=5758, max=46451, avg=27558.73, stdev=7307.71 00:16:14.347 lat (usec): min=9269, max=46469, avg=27768.72, stdev=7314.57 00:16:14.347 clat percentiles (usec): 00:16:14.347 | 1.00th=[12125], 5.00th=[14091], 10.00th=[14877], 20.00th=[17957], 00:16:14.347 | 30.00th=[26346], 40.00th=[28443], 50.00th=[30016], 60.00th=[31327], 00:16:14.347 | 70.00th=[31851], 80.00th=[33162], 90.00th=[34866], 95.00th=[35914], 00:16:14.347 | 99.00th=[40633], 99.50th=[42206], 99.90th=[46400], 99.95th=[46400], 00:16:14.347 | 99.99th=[46400] 00:16:14.347 bw ( KiB/s): min= 7216, max=10658, per=18.74%, avg=8937.00, stdev=2433.86, samples=2 00:16:14.347 iops : min= 1804, max= 2664, avg=2234.00, stdev=608.11, samples=2 00:16:14.347 lat (msec) : 10=0.14%, 20=20.10%, 50=79.76% 00:16:14.347 cpu : usr=1.59%, sys=7.74%, ctx=673, majf=0, minf=8 00:16:14.347 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:16:14.347 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:14.347 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:14.347 issued rwts: total=2048,2365,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:14.347 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:14.347 job1: (groupid=0, jobs=1): err= 0: pid=92707: Mon Jul 15 13:15:10 2024 00:16:14.347 read: IOPS=3086, BW=12.1MiB/s (12.6MB/s)(12.2MiB/1008msec) 00:16:14.347 slat (usec): min=9, max=8606, avg=154.31, stdev=750.83 00:16:14.347 clat (usec): min=6888, max=30705, avg=18739.08, stdev=3094.12 00:16:14.347 lat (usec): min=7932, max=30719, avg=18893.39, stdev=3151.77 00:16:14.347 clat percentiles (usec): 00:16:14.347 | 1.00th=[11600], 5.00th=[13566], 10.00th=[14353], 20.00th=[17171], 00:16:14.347 | 30.00th=[18220], 40.00th=[18482], 50.00th=[18744], 60.00th=[18744], 00:16:14.347 | 70.00th=[19268], 80.00th=[19792], 90.00th=[23200], 95.00th=[24249], 00:16:14.347 | 99.00th=[26084], 99.50th=[26346], 99.90th=[27395], 99.95th=[30802], 00:16:14.347 | 99.99th=[30802] 00:16:14.347 write: IOPS=3555, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1008msec); 0 zone resets 00:16:14.347 slat (usec): min=9, max=10267, avg=136.66, stdev=523.18 00:16:14.347 clat (usec): min=11053, max=32263, avg=19215.46, stdev=2542.24 00:16:14.347 lat (usec): min=11079, max=32291, avg=19352.12, stdev=2586.03 00:16:14.347 clat percentiles (usec): 00:16:14.347 | 1.00th=[12256], 5.00th=[14877], 10.00th=[16581], 20.00th=[17957], 00:16:14.347 | 30.00th=[18220], 40.00th=[18744], 50.00th=[19006], 60.00th=[19268], 00:16:14.347 | 70.00th=[19268], 80.00th=[21103], 90.00th=[22676], 95.00th=[23725], 00:16:14.347 | 99.00th=[26084], 99.50th=[26608], 99.90th=[31065], 99.95th=[32113], 00:16:14.347 | 99.99th=[32375] 00:16:14.347 bw ( KiB/s): min=13112, 
max=14856, per=29.33%, avg=13984.00, stdev=1233.19, samples=2 00:16:14.347 iops : min= 3278, max= 3714, avg=3496.00, stdev=308.30, samples=2 00:16:14.347 lat (msec) : 10=0.25%, 20=77.46%, 50=22.29% 00:16:14.347 cpu : usr=3.97%, sys=10.53%, ctx=502, majf=0, minf=7 00:16:14.347 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:16:14.347 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:14.347 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:14.347 issued rwts: total=3111,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:14.347 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:14.347 job2: (groupid=0, jobs=1): err= 0: pid=92708: Mon Jul 15 13:15:10 2024 00:16:14.347 read: IOPS=2033, BW=8135KiB/s (8330kB/s)(8192KiB/1007msec) 00:16:14.347 slat (usec): min=3, max=10410, avg=252.20, stdev=1002.93 00:16:14.347 clat (usec): min=15977, max=43859, avg=32195.34, stdev=5055.59 00:16:14.347 lat (usec): min=16791, max=43880, avg=32447.53, stdev=5025.62 00:16:14.347 clat percentiles (usec): 00:16:14.347 | 1.00th=[16909], 5.00th=[18220], 10.00th=[25297], 20.00th=[30278], 00:16:14.347 | 30.00th=[31589], 40.00th=[32637], 50.00th=[33424], 60.00th=[33817], 00:16:14.347 | 70.00th=[34341], 80.00th=[35390], 90.00th=[36963], 95.00th=[38011], 00:16:14.347 | 99.00th=[40633], 99.50th=[41681], 99.90th=[43779], 99.95th=[43779], 00:16:14.347 | 99.99th=[43779] 00:16:14.347 write: IOPS=2171, BW=8687KiB/s (8896kB/s)(8748KiB/1007msec); 0 zone resets 00:16:14.347 slat (usec): min=8, max=11113, avg=212.77, stdev=894.39 00:16:14.347 clat (usec): min=6959, max=39403, avg=27923.30, stdev=5701.52 00:16:14.347 lat (usec): min=6980, max=39455, avg=28136.06, stdev=5692.07 00:16:14.347 clat percentiles (usec): 00:16:14.347 | 1.00th=[13566], 5.00th=[17957], 10.00th=[18482], 20.00th=[22938], 00:16:14.347 | 30.00th=[24773], 40.00th=[27919], 50.00th=[30016], 60.00th=[30802], 00:16:14.347 | 70.00th=[31851], 80.00th=[32637], 90.00th=[33817], 95.00th=[34866], 00:16:14.347 | 99.00th=[38011], 99.50th=[39060], 99.90th=[39584], 99.95th=[39584], 00:16:14.347 | 99.99th=[39584] 00:16:14.347 bw ( KiB/s): min= 7352, max= 9146, per=17.30%, avg=8249.00, stdev=1268.55, samples=2 00:16:14.347 iops : min= 1838, max= 2286, avg=2062.00, stdev=316.78, samples=2 00:16:14.347 lat (msec) : 10=0.33%, 20=9.96%, 50=89.70% 00:16:14.347 cpu : usr=1.89%, sys=6.56%, ctx=678, majf=0, minf=13 00:16:14.347 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:16:14.347 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:14.347 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:14.347 issued rwts: total=2048,2187,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:14.347 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:14.347 job3: (groupid=0, jobs=1): err= 0: pid=92709: Mon Jul 15 13:15:10 2024 00:16:14.347 read: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec) 00:16:14.347 slat (usec): min=6, max=8516, avg=137.07, stdev=660.06 00:16:14.347 clat (usec): min=10376, max=25157, avg=17046.29, stdev=2441.22 00:16:14.347 lat (usec): min=10391, max=25183, avg=17183.36, stdev=2489.85 00:16:14.347 clat percentiles (usec): 00:16:14.347 | 1.00th=[11338], 5.00th=[12649], 10.00th=[13566], 20.00th=[15795], 00:16:14.347 | 30.00th=[16188], 40.00th=[16319], 50.00th=[16909], 60.00th=[17171], 00:16:14.347 | 70.00th=[17695], 80.00th=[18744], 90.00th=[20579], 95.00th=[21627], 00:16:14.347 | 99.00th=[23200], 
99.50th=[23462], 99.90th=[25035], 99.95th=[25035], 00:16:14.347 | 99.99th=[25035] 00:16:14.347 write: IOPS=3880, BW=15.2MiB/s (15.9MB/s)(15.2MiB/1003msec); 0 zone resets 00:16:14.347 slat (usec): min=11, max=8649, avg=122.38, stdev=461.11 00:16:14.347 clat (usec): min=542, max=26287, avg=16844.11, stdev=2430.62 00:16:14.347 lat (usec): min=6947, max=26371, avg=16966.49, stdev=2450.28 00:16:14.347 clat percentiles (usec): 00:16:14.347 | 1.00th=[ 7898], 5.00th=[12387], 10.00th=[14615], 20.00th=[15926], 00:16:14.347 | 30.00th=[16319], 40.00th=[16581], 50.00th=[16909], 60.00th=[17171], 00:16:14.347 | 70.00th=[17433], 80.00th=[17957], 90.00th=[19268], 95.00th=[21103], 00:16:14.347 | 99.00th=[23725], 99.50th=[24249], 99.90th=[25035], 99.95th=[26084], 00:16:14.347 | 99.99th=[26346] 00:16:14.347 bw ( KiB/s): min=13744, max=16368, per=31.58%, avg=15056.00, stdev=1855.45, samples=2 00:16:14.347 iops : min= 3436, max= 4092, avg=3764.00, stdev=463.86, samples=2 00:16:14.347 lat (usec) : 750=0.01% 00:16:14.347 lat (msec) : 10=0.90%, 20=89.29%, 50=9.80% 00:16:14.347 cpu : usr=3.69%, sys=12.08%, ctx=583, majf=0, minf=11 00:16:14.347 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:16:14.347 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:14.347 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:14.347 issued rwts: total=3584,3892,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:14.347 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:14.347 00:16:14.347 Run status group 0 (all jobs): 00:16:14.347 READ: bw=41.8MiB/s (43.8MB/s), 8119KiB/s-14.0MiB/s (8314kB/s-14.6MB/s), io=42.2MiB (44.2MB), run=1003-1009msec 00:16:14.347 WRITE: bw=46.6MiB/s (48.8MB/s), 8687KiB/s-15.2MiB/s (8896kB/s-15.9MB/s), io=47.0MiB (49.3MB), run=1003-1009msec 00:16:14.347 00:16:14.347 Disk stats (read/write): 00:16:14.347 nvme0n1: ios=1776/2048, merge=0/0, ticks=12426/12504, in_queue=24930, util=87.68% 00:16:14.347 nvme0n2: ios=2598/3023, merge=0/0, ticks=23950/26972, in_queue=50922, util=88.41% 00:16:14.347 nvme0n3: ios=1565/2048, merge=0/0, ticks=11798/13244, in_queue=25042, util=87.97% 00:16:14.347 nvme0n4: ios=3072/3247, merge=0/0, ticks=25496/25055, in_queue=50551, util=89.55% 00:16:14.347 13:15:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:16:14.347 13:15:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=92722 00:16:14.347 13:15:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:16:14.347 13:15:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:16:14.347 [global] 00:16:14.347 thread=1 00:16:14.347 invalidate=1 00:16:14.347 rw=read 00:16:14.347 time_based=1 00:16:14.347 runtime=10 00:16:14.347 ioengine=libaio 00:16:14.347 direct=1 00:16:14.347 bs=4096 00:16:14.347 iodepth=1 00:16:14.347 norandommap=1 00:16:14.347 numjobs=1 00:16:14.347 00:16:14.347 [job0] 00:16:14.347 filename=/dev/nvme0n1 00:16:14.347 [job1] 00:16:14.347 filename=/dev/nvme0n2 00:16:14.347 [job2] 00:16:14.347 filename=/dev/nvme0n3 00:16:14.347 [job3] 00:16:14.347 filename=/dev/nvme0n4 00:16:14.347 Could not set queue depth (nvme0n1) 00:16:14.347 Could not set queue depth (nvme0n2) 00:16:14.347 Could not set queue depth (nvme0n3) 00:16:14.347 Could not set queue depth (nvme0n4) 00:16:14.347 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:14.347 job1: (g=0): rw=read, bs=(R) 4096B-4096B, 
(W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:14.348 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:14.348 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:14.348 fio-3.35 00:16:14.348 Starting 4 threads 00:16:17.624 13:15:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:16:17.624 fio: pid=92770, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:17.624 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=59179008, buflen=4096 00:16:17.624 13:15:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:16:17.624 fio: pid=92769, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:17.624 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=35917824, buflen=4096 00:16:17.624 13:15:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:17.624 13:15:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:16:17.882 fio: pid=92767, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:17.882 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=39317504, buflen=4096 00:16:17.882 13:15:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:17.882 13:15:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:16:18.140 fio: pid=92768, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:18.140 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=6225920, buflen=4096 00:16:18.140 00:16:18.140 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=92767: Mon Jul 15 13:15:14 2024 00:16:18.140 read: IOPS=2774, BW=10.8MiB/s (11.4MB/s)(37.5MiB/3460msec) 00:16:18.140 slat (usec): min=9, max=13257, avg=20.23, stdev=210.41 00:16:18.140 clat (usec): min=159, max=4402, avg=338.35, stdev=110.34 00:16:18.140 lat (usec): min=177, max=13861, avg=358.58, stdev=239.42 00:16:18.140 clat percentiles (usec): 00:16:18.140 | 1.00th=[ 198], 5.00th=[ 235], 10.00th=[ 265], 20.00th=[ 273], 00:16:18.140 | 30.00th=[ 281], 40.00th=[ 285], 50.00th=[ 293], 60.00th=[ 322], 00:16:18.140 | 70.00th=[ 404], 80.00th=[ 424], 90.00th=[ 449], 95.00th=[ 469], 00:16:18.140 | 99.00th=[ 562], 99.50th=[ 644], 99.90th=[ 1287], 99.95th=[ 2147], 00:16:18.140 | 99.99th=[ 4424] 00:16:18.140 bw ( KiB/s): min= 8784, max=13296, per=20.23%, avg=11017.33, stdev=2172.79, samples=6 00:16:18.140 iops : min= 2196, max= 3324, avg=2754.33, stdev=543.20, samples=6 00:16:18.140 lat (usec) : 250=7.54%, 500=90.28%, 750=1.91%, 1000=0.12% 00:16:18.140 lat (msec) : 2=0.08%, 4=0.04%, 10=0.01% 00:16:18.140 cpu : usr=1.04%, sys=3.70%, ctx=9613, majf=0, minf=1 00:16:18.140 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:18.140 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:18.140 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:18.140 issued rwts: total=9600,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:18.140 latency : target=0, 
window=0, percentile=100.00%, depth=1 00:16:18.140 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=92768: Mon Jul 15 13:15:14 2024 00:16:18.140 read: IOPS=4806, BW=18.8MiB/s (19.7MB/s)(69.9MiB/3725msec) 00:16:18.140 slat (usec): min=11, max=14209, avg=18.04, stdev=183.71 00:16:18.140 clat (usec): min=3, max=2210, avg=188.49, stdev=43.79 00:16:18.140 lat (usec): min=152, max=14501, avg=206.52, stdev=190.09 00:16:18.140 clat percentiles (usec): 00:16:18.140 | 1.00th=[ 147], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 161], 00:16:18.140 | 30.00th=[ 165], 40.00th=[ 169], 50.00th=[ 176], 60.00th=[ 188], 00:16:18.140 | 70.00th=[ 204], 80.00th=[ 221], 90.00th=[ 237], 95.00th=[ 247], 00:16:18.140 | 99.00th=[ 273], 99.50th=[ 297], 99.90th=[ 494], 99.95th=[ 709], 00:16:18.140 | 99.99th=[ 2024] 00:16:18.140 bw ( KiB/s): min=16480, max=21824, per=35.30%, avg=19224.43, stdev=2393.03, samples=7 00:16:18.140 iops : min= 4120, max= 5456, avg=4806.00, stdev=598.36, samples=7 00:16:18.140 lat (usec) : 4=0.01%, 100=0.01%, 250=95.79%, 500=4.10%, 750=0.05% 00:16:18.140 lat (usec) : 1000=0.01% 00:16:18.140 lat (msec) : 2=0.02%, 4=0.01% 00:16:18.140 cpu : usr=1.42%, sys=6.04%, ctx=17930, majf=0, minf=1 00:16:18.140 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:18.140 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:18.140 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:18.140 issued rwts: total=17905,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:18.140 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:18.140 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=92769: Mon Jul 15 13:15:14 2024 00:16:18.140 read: IOPS=2722, BW=10.6MiB/s (11.2MB/s)(34.3MiB/3221msec) 00:16:18.140 slat (usec): min=9, max=11697, avg=17.49, stdev=157.46 00:16:18.140 clat (usec): min=197, max=3672, avg=347.79, stdev=97.39 00:16:18.140 lat (usec): min=211, max=12019, avg=365.28, stdev=185.67 00:16:18.140 clat percentiles (usec): 00:16:18.140 | 1.00th=[ 260], 5.00th=[ 269], 10.00th=[ 273], 20.00th=[ 281], 00:16:18.140 | 30.00th=[ 285], 40.00th=[ 293], 50.00th=[ 302], 60.00th=[ 347], 00:16:18.140 | 70.00th=[ 408], 80.00th=[ 429], 90.00th=[ 453], 95.00th=[ 474], 00:16:18.140 | 99.00th=[ 553], 99.50th=[ 603], 99.90th=[ 1090], 99.95th=[ 1401], 00:16:18.140 | 99.99th=[ 3687] 00:16:18.140 bw ( KiB/s): min= 8784, max=13304, per=20.40%, avg=11110.67, stdev=2087.99, samples=6 00:16:18.140 iops : min= 2196, max= 3326, avg=2777.67, stdev=522.00, samples=6 00:16:18.140 lat (usec) : 250=0.62%, 500=97.21%, 750=1.86%, 1000=0.19% 00:16:18.140 lat (msec) : 2=0.08%, 4=0.03% 00:16:18.140 cpu : usr=0.96%, sys=3.60%, ctx=8781, majf=0, minf=1 00:16:18.140 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:18.140 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:18.140 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:18.140 issued rwts: total=8770,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:18.140 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:18.140 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=92770: Mon Jul 15 13:15:14 2024 00:16:18.140 read: IOPS=4861, BW=19.0MiB/s (19.9MB/s)(56.4MiB/2972msec) 00:16:18.140 slat (nsec): min=13316, max=80433, avg=17093.01, stdev=3972.39 00:16:18.140 clat (usec): min=145, 
max=761, avg=186.86, stdev=30.10 00:16:18.140 lat (usec): min=159, max=777, avg=203.95, stdev=31.23 00:16:18.140 clat percentiles (usec): 00:16:18.140 | 1.00th=[ 155], 5.00th=[ 157], 10.00th=[ 159], 20.00th=[ 163], 00:16:18.140 | 30.00th=[ 167], 40.00th=[ 169], 50.00th=[ 176], 60.00th=[ 182], 00:16:18.140 | 70.00th=[ 198], 80.00th=[ 217], 90.00th=[ 235], 95.00th=[ 245], 00:16:18.140 | 99.00th=[ 265], 99.50th=[ 277], 99.90th=[ 310], 99.95th=[ 330], 00:16:18.140 | 99.99th=[ 482] 00:16:18.140 bw ( KiB/s): min=16504, max=21648, per=36.36%, avg=19801.60, stdev=2511.61, samples=5 00:16:18.140 iops : min= 4126, max= 5412, avg=4950.40, stdev=627.90, samples=5 00:16:18.140 lat (usec) : 250=96.68%, 500=3.30%, 1000=0.01% 00:16:18.140 cpu : usr=1.51%, sys=6.77%, ctx=14449, majf=0, minf=1 00:16:18.140 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:18.140 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:18.140 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:18.140 issued rwts: total=14449,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:18.140 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:18.140 00:16:18.140 Run status group 0 (all jobs): 00:16:18.140 READ: bw=53.2MiB/s (55.8MB/s), 10.6MiB/s-19.0MiB/s (11.2MB/s-19.9MB/s), io=198MiB (208MB), run=2972-3725msec 00:16:18.140 00:16:18.140 Disk stats (read/write): 00:16:18.141 nvme0n1: ios=9263/0, merge=0/0, ticks=3134/0, in_queue=3134, util=95.17% 00:16:18.141 nvme0n2: ios=17319/0, merge=0/0, ticks=3355/0, in_queue=3355, util=95.29% 00:16:18.141 nvme0n3: ios=8547/0, merge=0/0, ticks=2944/0, in_queue=2944, util=96.15% 00:16:18.141 nvme0n4: ios=14041/0, merge=0/0, ticks=2663/0, in_queue=2663, util=96.70% 00:16:18.141 13:15:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:18.141 13:15:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:16:18.706 13:15:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:18.706 13:15:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:16:18.706 13:15:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:18.706 13:15:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:16:19.271 13:15:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:19.271 13:15:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:16:19.271 13:15:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:19.271 13:15:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:16:19.838 13:15:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:16:19.838 13:15:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 92722 00:16:19.838 13:15:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:16:19.838 13:15:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n 
nqn.2016-06.io.spdk:cnode1 00:16:19.838 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:19.838 13:15:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:19.838 13:15:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1215 -- # local i=0 00:16:19.838 13:15:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:19.838 13:15:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:16:19.838 13:15:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:16:19.838 13:15:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:19.838 13:15:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # return 0 00:16:19.838 nvmf hotplug test: fio failed as expected 00:16:19.838 13:15:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:16:19.838 13:15:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:16:19.838 13:15:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:20.096 13:15:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:16:20.096 13:15:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:16:20.096 13:15:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:16:20.096 13:15:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:16:20.096 13:15:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:16:20.096 13:15:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:20.096 13:15:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:16:20.096 13:15:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:20.096 13:15:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:16:20.096 13:15:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:20.096 13:15:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:20.096 rmmod nvme_tcp 00:16:20.096 rmmod nvme_fabrics 00:16:20.096 rmmod nvme_keyring 00:16:20.096 13:15:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:20.096 13:15:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:16:20.096 13:15:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:16:20.096 13:15:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 92232 ']' 00:16:20.096 13:15:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 92232 00:16:20.096 13:15:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@946 -- # '[' -z 92232 ']' 00:16:20.096 13:15:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@950 -- # kill -0 92232 00:16:20.096 13:15:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # uname 00:16:20.096 13:15:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:20.096 13:15:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 92232 00:16:20.097 killing process with pid 92232 00:16:20.097 13:15:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:20.097 13:15:16 
nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:20.097 13:15:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 92232' 00:16:20.097 13:15:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@965 -- # kill 92232 00:16:20.097 13:15:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@970 -- # wait 92232 00:16:20.355 13:15:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:20.355 13:15:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:20.355 13:15:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:20.355 13:15:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:20.355 13:15:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:20.355 13:15:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:20.355 13:15:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:20.355 13:15:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:20.355 13:15:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:20.355 00:16:20.355 real 0m19.804s 00:16:20.355 user 1m16.420s 00:16:20.355 sys 0m8.532s 00:16:20.355 13:15:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:20.355 13:15:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.355 ************************************ 00:16:20.355 END TEST nvmf_fio_target 00:16:20.355 ************************************ 00:16:20.355 13:15:16 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:20.355 13:15:16 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:20.355 13:15:16 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:20.355 13:15:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:20.355 ************************************ 00:16:20.356 START TEST nvmf_bdevio 00:16:20.356 ************************************ 00:16:20.356 13:15:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:20.356 * Looking for test storage... 
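As an aside on the hotplug check traced above: the read-only fio run is left in the background while the malloc and raid bdevs backing the namespaces are deleted over RPC, so fio is expected to die with Remote I/O errors. A minimal sketch of that expectation check, reconstructed from the trace (fio_pid/fio_status are the names visible in the trace; the real target/fio.sh has more bookkeeping around this):

  # Reconstruction of fio.sh's hotplug expectation check (simplified sketch).
  fio_status=0
  wait "$fio_pid" || fio_status=$?          # fio exits non-zero once its backing bdevs are gone
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  if [ "$fio_status" -eq 0 ]; then
      # fio surviving the bdev removal would mean the hotplug path was never exercised
      exit 1
  fi
  echo 'nvmf hotplug test: fio failed as expected'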
00:16:20.356 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:20.356 13:15:17 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:20.356 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:16:20.356 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:20.356 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:20.356 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:20.356 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:20.356 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:20.356 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:20.356 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:20.356 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:20.356 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:20.356 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:20.356 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:16:20.356 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=c8b8b44b-387e-43b9-a950-dc0d98528a02 00:16:20.356 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:20.356 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:20.356 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:20.356 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:20.356 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:20.356 13:15:17 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:20.356 13:15:17 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:20.356 13:15:17 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:20.356 13:15:17 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.356 13:15:17 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.356 13:15:17 
nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.356 13:15:17 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:16:20.356 13:15:17 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.356 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:16:20.356 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:20.356 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:20.356 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:20.356 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:20.356 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:20.356 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:20.356 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:20.356 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:20.356 13:15:17 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:20.356 13:15:17 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:20.356 13:15:17 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:16:20.356 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:20.356 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:20.356 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:20.356 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:20.356 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:20.356 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:20.356 13:15:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:20.356 13:15:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:20.614 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:20.614 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:20.614 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:20.614 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:20.614 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@431 -- 
# [[ tcp == tcp ]] 00:16:20.614 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:20.614 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:20.614 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:20.614 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:20.614 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:20.614 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:20.614 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:20.614 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:20.614 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:20.614 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:20.614 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:20.614 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:20.614 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:20.614 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:20.614 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:20.614 Cannot find device "nvmf_tgt_br" 00:16:20.614 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # true 00:16:20.614 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:20.614 Cannot find device "nvmf_tgt_br2" 00:16:20.614 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # true 00:16:20.614 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:20.614 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:20.614 Cannot find device "nvmf_tgt_br" 00:16:20.614 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # true 00:16:20.614 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:20.614 Cannot find device "nvmf_tgt_br2" 00:16:20.614 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # true 00:16:20.614 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:20.614 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:20.614 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:20.614 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:20.614 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:16:20.614 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:20.614 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:20.614 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:16:20.614 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:20.614 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:20.614 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link add 
nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:20.614 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:20.614 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:20.614 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:20.614 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:20.614 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:20.614 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:20.614 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:20.614 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:20.614 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:20.614 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:20.614 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:20.614 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:20.614 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:20.614 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:20.614 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:20.614 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:20.614 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:20.872 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:20.872 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:20.872 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:20.872 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:20.872 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:20.872 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:16:20.872 00:16:20.872 --- 10.0.0.2 ping statistics --- 00:16:20.872 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:20.872 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:16:20.872 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:20.872 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:20.872 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:16:20.872 00:16:20.872 --- 10.0.0.3 ping statistics --- 00:16:20.872 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:20.872 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:16:20.872 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:20.872 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:20.872 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:16:20.872 00:16:20.872 --- 10.0.0.1 ping statistics --- 00:16:20.872 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:20.872 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:16:20.872 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:20.872 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@433 -- # return 0 00:16:20.872 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:20.872 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:20.872 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:20.872 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:20.872 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:20.872 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:20.872 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:20.872 13:15:17 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:20.872 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:20.872 13:15:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:20.872 13:15:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:20.872 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:16:20.872 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=93096 00:16:20.872 13:15:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 93096 00:16:20.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:20.872 13:15:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@827 -- # '[' -z 93096 ']' 00:16:20.872 13:15:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:20.872 13:15:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:20.872 13:15:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:20.872 13:15:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:20.872 13:15:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:20.872 [2024-07-15 13:15:17.485061] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:16:20.872 [2024-07-15 13:15:17.485158] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:21.131 [2024-07-15 13:15:17.624426] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:21.131 [2024-07-15 13:15:17.708211] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:21.131 [2024-07-15 13:15:17.708788] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:21.131 [2024-07-15 13:15:17.709162] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:21.131 [2024-07-15 13:15:17.709604] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:21.131 [2024-07-15 13:15:17.709807] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:21.131 [2024-07-15 13:15:17.710264] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:21.131 [2024-07-15 13:15:17.710517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:16:21.131 [2024-07-15 13:15:17.710515] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:21.131 [2024-07-15 13:15:17.710411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:16:21.697 13:15:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:21.697 13:15:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@860 -- # return 0 00:16:21.697 13:15:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:21.697 13:15:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:21.697 13:15:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:21.954 13:15:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:21.954 13:15:18 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:21.954 13:15:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.954 13:15:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:21.954 [2024-07-15 13:15:18.475726] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:21.954 13:15:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.954 13:15:18 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:21.954 13:15:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.954 13:15:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:21.954 Malloc0 00:16:21.954 13:15:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.954 13:15:18 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:21.954 13:15:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.954 13:15:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:21.954 13:15:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.954 13:15:18 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:21.954 13:15:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.954 13:15:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:21.954 13:15:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.954 13:15:18 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:21.954 13:15:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.954 13:15:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
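For reference, the rpc_cmd calls traced above (rpc_cmd is the harness wrapper around scripts/rpc.py talking to the default /var/tmp/spdk.sock) amount to the following target bring-up; a standalone sketch with the flags, NQN, serial and listener address copied from the trace:

  # Bring up the NVMe/TCP target state used by bdevio (run against the
  # nvmf_tgt started in the nvmf_tgt_ns_spdk namespace above).
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0            # 64 MiB bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420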
00:16:21.954 [2024-07-15 13:15:18.543807] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:21.954 13:15:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.954 13:15:18 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:16:21.954 13:15:18 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:21.955 13:15:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:16:21.955 13:15:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:16:21.955 13:15:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:21.955 13:15:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:21.955 { 00:16:21.955 "params": { 00:16:21.955 "name": "Nvme$subsystem", 00:16:21.955 "trtype": "$TEST_TRANSPORT", 00:16:21.955 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:21.955 "adrfam": "ipv4", 00:16:21.955 "trsvcid": "$NVMF_PORT", 00:16:21.955 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:21.955 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:21.955 "hdgst": ${hdgst:-false}, 00:16:21.955 "ddgst": ${ddgst:-false} 00:16:21.955 }, 00:16:21.955 "method": "bdev_nvme_attach_controller" 00:16:21.955 } 00:16:21.955 EOF 00:16:21.955 )") 00:16:21.955 13:15:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:16:21.955 13:15:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:16:21.955 13:15:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:16:21.955 13:15:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:21.955 "params": { 00:16:21.955 "name": "Nvme1", 00:16:21.955 "trtype": "tcp", 00:16:21.955 "traddr": "10.0.0.2", 00:16:21.955 "adrfam": "ipv4", 00:16:21.955 "trsvcid": "4420", 00:16:21.955 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:21.955 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:21.955 "hdgst": false, 00:16:21.955 "ddgst": false 00:16:21.955 }, 00:16:21.955 "method": "bdev_nvme_attach_controller" 00:16:21.955 }' 00:16:21.955 [2024-07-15 13:15:18.607708] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
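The generated JSON above is what feeds bdevio's --json option: it makes the bdev layer inside bdevio attach a controller named Nvme1 over NVMe/TCP to 10.0.0.2:4420 / nqn.2016-06.io.spdk:cnode1, which then appears as the Nvme1n1 bdev under test. Purely as an illustration of the same parameters, a kernel-initiator connection would look roughly like this (not something bdevio itself does):

  # Hypothetical nvme-cli equivalent of the attach parameters in the JSON above.
  modprobe nvme-tcp
  nvme connect -t tcp -a 10.0.0.2 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 --hostnqn=nqn.2016-06.io.spdk:host1
  nvme list        # the namespace backed by Malloc0 appears as a /dev/nvmeXn1 device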
00:16:21.955 [2024-07-15 13:15:18.607811] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93150 ] 00:16:22.212 [2024-07-15 13:15:18.752661] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:22.212 [2024-07-15 13:15:18.834052] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:22.212 [2024-07-15 13:15:18.834179] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:22.212 [2024-07-15 13:15:18.834183] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:22.469 I/O targets: 00:16:22.469 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:22.469 00:16:22.469 00:16:22.469 CUnit - A unit testing framework for C - Version 2.1-3 00:16:22.469 http://cunit.sourceforge.net/ 00:16:22.469 00:16:22.469 00:16:22.469 Suite: bdevio tests on: Nvme1n1 00:16:22.469 Test: blockdev write read block ...passed 00:16:22.469 Test: blockdev write zeroes read block ...passed 00:16:22.469 Test: blockdev write zeroes read no split ...passed 00:16:22.469 Test: blockdev write zeroes read split ...passed 00:16:22.470 Test: blockdev write zeroes read split partial ...passed 00:16:22.470 Test: blockdev reset ...[2024-07-15 13:15:19.125639] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:22.470 [2024-07-15 13:15:19.125751] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc040d0 (9): Bad file descriptor 00:16:22.470 passed 00:16:22.470 Test: blockdev write read 8 blocks ...[2024-07-15 13:15:19.140320] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:16:22.470 passed 00:16:22.470 Test: blockdev write read size > 128k ...passed 00:16:22.470 Test: blockdev write read invalid size ...passed 00:16:22.470 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:22.470 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:22.470 Test: blockdev write read max offset ...passed 00:16:22.727 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:22.727 Test: blockdev writev readv 8 blocks ...passed 00:16:22.728 Test: blockdev writev readv 30 x 1block ...passed 00:16:22.728 Test: blockdev writev readv block ...passed 00:16:22.728 Test: blockdev writev readv size > 128k ...passed 00:16:22.728 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:22.728 Test: blockdev comparev and writev ...[2024-07-15 13:15:19.311665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:22.728 [2024-07-15 13:15:19.311714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:22.728 [2024-07-15 13:15:19.311736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:22.728 [2024-07-15 13:15:19.311748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:22.728 [2024-07-15 13:15:19.312028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:22.728 [2024-07-15 13:15:19.312046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:22.728 [2024-07-15 13:15:19.312062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:22.728 [2024-07-15 13:15:19.312073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:22.728 [2024-07-15 13:15:19.312369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:22.728 [2024-07-15 13:15:19.312389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:22.728 [2024-07-15 13:15:19.312405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:22.728 [2024-07-15 13:15:19.312415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:22.728 [2024-07-15 13:15:19.312843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:22.728 [2024-07-15 13:15:19.312866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:22.728 [2024-07-15 13:15:19.312882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:22.728 [2024-07-15 13:15:19.312892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 
00:16:22.728 passed 00:16:22.728 Test: blockdev nvme passthru rw ...passed 00:16:22.728 Test: blockdev nvme passthru vendor specific ...[2024-07-15 13:15:19.395728] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:22.728 [2024-07-15 13:15:19.395764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:22.728 [2024-07-15 13:15:19.395895] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:22.728 [2024-07-15 13:15:19.395912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:22.728 [2024-07-15 13:15:19.396021] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:22.728 [2024-07-15 13:15:19.396037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:22.728 passed 00:16:22.728 Test: blockdev nvme admin passthru ...[2024-07-15 13:15:19.396150] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:22.728 [2024-07-15 13:15:19.396165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:22.728 passed 00:16:22.728 Test: blockdev copy ...passed 00:16:22.728 00:16:22.728 Run Summary: Type Total Ran Passed Failed Inactive 00:16:22.728 suites 1 1 n/a 0 0 00:16:22.728 tests 23 23 23 0 0 00:16:22.728 asserts 152 152 152 0 n/a 00:16:22.728 00:16:22.728 Elapsed time = 0.892 seconds 00:16:23.050 13:15:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:23.050 13:15:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.050 13:15:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:23.050 13:15:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.050 13:15:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:23.050 13:15:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:16:23.050 13:15:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:23.050 13:15:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:16:23.050 13:15:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:23.050 13:15:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:16:23.050 13:15:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:23.050 13:15:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:23.050 rmmod nvme_tcp 00:16:23.050 rmmod nvme_fabrics 00:16:23.050 rmmod nvme_keyring 00:16:23.050 13:15:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:23.050 13:15:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:16:23.050 13:15:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:16:23.050 13:15:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 93096 ']' 00:16:23.050 13:15:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 93096 00:16:23.050 13:15:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@946 -- # '[' -z 93096 ']' 00:16:23.050 13:15:19 nvmf_tcp.nvmf_bdevio -- 
common/autotest_common.sh@950 -- # kill -0 93096 00:16:23.050 13:15:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # uname 00:16:23.050 13:15:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:23.050 13:15:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 93096 00:16:23.307 killing process with pid 93096 00:16:23.307 13:15:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:16:23.307 13:15:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:16:23.307 13:15:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@964 -- # echo 'killing process with pid 93096' 00:16:23.307 13:15:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@965 -- # kill 93096 00:16:23.307 13:15:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@970 -- # wait 93096 00:16:23.307 13:15:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:23.307 13:15:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:23.307 13:15:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:23.307 13:15:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:23.307 13:15:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:23.307 13:15:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:23.307 13:15:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:23.307 13:15:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:23.307 13:15:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:23.565 00:16:23.565 real 0m3.051s 00:16:23.565 user 0m11.117s 00:16:23.565 sys 0m0.785s 00:16:23.565 13:15:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:23.565 13:15:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:23.565 ************************************ 00:16:23.565 END TEST nvmf_bdevio 00:16:23.565 ************************************ 00:16:23.565 13:15:20 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:23.565 13:15:20 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:23.565 13:15:20 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:23.565 13:15:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:23.565 ************************************ 00:16:23.565 START TEST nvmf_auth_target 00:16:23.565 ************************************ 00:16:23.565 13:15:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:23.565 * Looking for test storage... 
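From this point the log belongs to the nvmf_auth_target suite started by run_test above. Based on the invocation traced there, the suite can be reproduced on a checkout laid out like this CI workspace roughly as follows (running it outside the CI image is an assumption; it needs root for the network namespace, iptables and modprobe steps that follow):

    cd /home/vagrant/spdk_repo/spdk
    sudo ./test/nvmf/target/auth.sh --transport=tcp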
00:16:23.565 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:23.565 13:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:23.565 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:16:23.565 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:23.565 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:23.565 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:23.565 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:23.565 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:23.565 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:23.565 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:23.565 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:23.565 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:23.565 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:23.565 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:16:23.565 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=c8b8b44b-387e-43b9-a950-dc0d98528a02 00:16:23.565 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:23.565 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:23.565 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:23.565 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:23.565 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:23.565 13:15:20 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:23.565 13:15:20 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:23.565 13:15:20 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:23.565 13:15:20 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.565 13:15:20 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.565 13:15:20 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.565 13:15:20 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:16:23.565 13:15:20 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.565 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:16:23.565 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:23.565 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:23.565 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:23.565 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:23.565 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:23.565 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:23.565 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:23.565 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:23.565 13:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:23.565 13:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:23.565 13:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:16:23.565 13:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:16:23.565 13:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:16:23.565 13:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:16:23.565 13:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:16:23.565 13:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:16:23.566 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:23.566 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:23.566 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:23.566 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:23.566 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:23.566 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:23.566 13:15:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:23.566 13:15:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:23.566 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:23.566 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:23.566 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:23.566 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:23.566 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:23.566 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:23.566 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:23.566 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:23.566 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:23.566 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:23.566 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:23.566 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:23.566 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:23.566 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:23.566 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:23.566 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:23.566 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:23.566 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:23.566 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:23.566 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:23.566 Cannot find device "nvmf_tgt_br" 00:16:23.566 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # true 00:16:23.566 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:23.566 Cannot find device "nvmf_tgt_br2" 00:16:23.566 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # true 00:16:23.566 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:23.566 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:23.566 Cannot find device "nvmf_tgt_br" 00:16:23.566 
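The "Cannot find device ..." and "Cannot open network namespace ..." messages here and just below are expected: nvmf_veth_init tears down any leftover test interfaces before creating fresh ones, and each cleanup command is allowed to fail on a clean host. A condensed sketch of that tolerant pre-clean, using the same interface names traced in this run:

    # pre-clean: ignore errors if the objects do not exist yet
    ip link set nvmf_init_br nomaster || true
    ip link set nvmf_tgt_br nomaster || true
    ip link set nvmf_tgt_br2 nomaster || true
    ip link delete nvmf_br type bridge || true
    ip link delete nvmf_init_if || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 || true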
13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # true 00:16:23.566 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:23.566 Cannot find device "nvmf_tgt_br2" 00:16:23.566 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # true 00:16:23.566 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:23.566 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:23.824 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:23.824 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:23.824 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:16:23.824 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:23.824 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:23.824 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:16:23.824 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:23.824 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:23.824 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:23.824 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:23.824 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:23.824 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:23.824 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:23.824 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:23.824 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:23.824 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:23.824 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:23.824 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:23.824 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:23.824 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:23.824 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:23.824 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:23.824 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:23.824 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:23.824 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:23.824 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:23.824 13:15:20 
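Condensed, the topology nvmf_veth_init builds here is one host-side bridge (nvmf_br) joining three veth pairs: nvmf_init_if stays in the root namespace as the initiator interface (10.0.0.1/24), while nvmf_tgt_if (10.0.0.2/24) and nvmf_tgt_if2 (10.0.0.3/24) are moved into the nvmf_tgt_ns_spdk namespace where the target will run. The same commands as traced above, grouped for readability:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br   # traced just after this point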
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:23.824 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:23.824 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:23.824 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:23.824 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:23.824 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:16:23.824 00:16:23.824 --- 10.0.0.2 ping statistics --- 00:16:23.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:23.824 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:16:23.824 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:23.824 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:23.824 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:16:23.824 00:16:23.824 --- 10.0.0.3 ping statistics --- 00:16:23.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:23.824 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:16:23.824 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:23.824 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:23.824 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:16:23.824 00:16:23.824 --- 10.0.0.1 ping statistics --- 00:16:23.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:23.824 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:16:23.824 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:23.824 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@433 -- # return 0 00:16:23.824 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:23.824 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:23.824 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:23.824 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:23.824 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:23.824 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:23.824 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:23.824 13:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:16:23.824 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:23.824 13:15:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:23.824 13:15:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.824 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=93332 00:16:23.824 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 93332 00:16:23.824 13:15:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 93332 ']' 00:16:23.824 13:15:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:16:23.824 13:15:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:23.824 13:15:20 
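Connectivity is then verified and the target application is launched inside the namespace; the remainder of this chunk is waitforlisten polling for its RPC socket. The essential steps as traced above (paths are the ones used by this workspace):

    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2                                   # root ns -> target ns
    ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target ns -> root ns
    modprobe nvme-tcp
    # the NVMe-oF target runs inside the namespace, with nvmf_auth debug logging enabled
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &
    # ...then wait until it listens on /var/tmp/spdk.sock before issuing RPCs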
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:23.824 13:15:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:23.824 13:15:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:23.824 13:15:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=93376 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=7f5c6249898b5b7a2ea4e73de2c61f6f42199586f300afe2 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.DoG 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 7f5c6249898b5b7a2ea4e73de2c61f6f42199586f300afe2 0 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 7f5c6249898b5b7a2ea4e73de2c61f6f42199586f300afe2 0 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=7f5c6249898b5b7a2ea4e73de2c61f6f42199586f300afe2 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.DoG 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.DoG 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.DoG 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=5cbcf234d43a178a39fc1d44a8786744ca7a2d41e9402f727a67bc7eec2b1504 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.iWT 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 5cbcf234d43a178a39fc1d44a8786744ca7a2d41e9402f727a67bc7eec2b1504 3 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 5cbcf234d43a178a39fc1d44a8786744ca7a2d41e9402f727a67bc7eec2b1504 3 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=5cbcf234d43a178a39fc1d44a8786744ca7a2d41e9402f727a67bc7eec2b1504 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.iWT 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.iWT 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.iWT 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a624552ce47518c2620fe78a77cb696a 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.DKk 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a624552ce47518c2620fe78a77cb696a 1 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a624552ce47518c2620fe78a77cb696a 1 
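Each gen_dhchap_key call above produces one DH-HMAC-CHAP secret file: it draws random bytes, hex-encodes them, and the inline python in nvmf/common.sh wraps the result in the NVMe secret interchange form DHHC-1:<hash-id>:<base64 blob>: that reappears later on the nvme connect command lines (hash id 00 marks an untransformed secret, 01/02/03 correspond to SHA-256/384/512; per that format the base64 blob is the secret with a CRC-32 appended). A sketch of the observable steps, leaving the exact wrapping to the existing helper rather than reimplementing it:

    # after sourcing test/nvmf/common.sh, as auth.sh does:
    key0=$(gen_dhchap_key null 48)      # 48 hex chars, e.g. /tmp/spdk.key-null.DoG in this run
    ckey0=$(gen_dhchap_key sha512 64)   # 64 hex chars, e.g. /tmp/spdk.key-sha512.iWT
    cat "$key0"                         # -> DHHC-1:00:<base64 of secret + CRC-32>:
    # roughly what the helper does internally:
    secret=$(xxd -p -c0 -l 24 /dev/urandom)   # 24 random bytes -> 48 hex characters
    file=$(mktemp -t spdk.key-null.XXX)
    # ...inline python writes "DHHC-1:00:<base64>:" into $file...
    chmod 0600 "$file"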
00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=a624552ce47518c2620fe78a77cb696a 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.DKk 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.DKk 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.DKk 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=f505c2b320b04272dd436c20758dc3af9fe1ff677bf34b09 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.gap 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key f505c2b320b04272dd436c20758dc3af9fe1ff677bf34b09 2 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 f505c2b320b04272dd436c20758dc3af9fe1ff677bf34b09 2 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=f505c2b320b04272dd436c20758dc3af9fe1ff677bf34b09 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.gap 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.gap 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.gap 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:25.196 
13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=9878aec554b1b2694e8943f1f1fdf5fb12bd07b3e19fc44f 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Tkw 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 9878aec554b1b2694e8943f1f1fdf5fb12bd07b3e19fc44f 2 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 9878aec554b1b2694e8943f1f1fdf5fb12bd07b3e19fc44f 2 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:25.196 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:25.197 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=9878aec554b1b2694e8943f1f1fdf5fb12bd07b3e19fc44f 00:16:25.197 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:16:25.197 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:25.456 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Tkw 00:16:25.456 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Tkw 00:16:25.456 13:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.Tkw 00:16:25.456 13:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:16:25.456 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:25.456 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:25.456 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:25.456 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:16:25.456 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:16:25.456 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:25.456 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=e61656643a15fa95441f1fec6586088b 00:16:25.456 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:16:25.456 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.9Q6 00:16:25.456 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key e61656643a15fa95441f1fec6586088b 1 00:16:25.456 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 e61656643a15fa95441f1fec6586088b 1 00:16:25.456 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:25.456 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:25.456 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=e61656643a15fa95441f1fec6586088b 00:16:25.456 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:16:25.456 13:15:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:25.456 13:15:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.9Q6 00:16:25.456 13:15:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.9Q6 00:16:25.456 13:15:22 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.9Q6 00:16:25.456 13:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:16:25.456 13:15:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:25.456 13:15:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:25.456 13:15:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:25.456 13:15:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:16:25.456 13:15:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:16:25.456 13:15:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:25.456 13:15:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=ead6bf8c2208d294bed1ab86f182f20e56c7c4959112a99435dd557f94bb64f5 00:16:25.456 13:15:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:16:25.456 13:15:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Tin 00:16:25.456 13:15:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key ead6bf8c2208d294bed1ab86f182f20e56c7c4959112a99435dd557f94bb64f5 3 00:16:25.456 13:15:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 ead6bf8c2208d294bed1ab86f182f20e56c7c4959112a99435dd557f94bb64f5 3 00:16:25.456 13:15:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:25.456 13:15:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:25.456 13:15:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=ead6bf8c2208d294bed1ab86f182f20e56c7c4959112a99435dd557f94bb64f5 00:16:25.456 13:15:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:16:25.456 13:15:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:25.456 13:15:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Tin 00:16:25.456 13:15:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Tin 00:16:25.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:25.456 13:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.Tin 00:16:25.456 13:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:16:25.456 13:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 93332 00:16:25.456 13:15:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 93332 ']' 00:16:25.456 13:15:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:25.456 13:15:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:25.456 13:15:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
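Two SPDK processes cooperate from here on: the nvmf_tgt launched in the namespace earlier (pid 93332) listens on the default /var/tmp/spdk.sock and acts as the authenticating target, while the spdk_tgt started with -r /var/tmp/host.sock -L nvme_auth (pid 93376) acts as the host-side bdev/nvme stack. In auth.sh the rpc_cmd wrapper drives the former and the hostrpc wrapper the latter, which is why every host-side call below is traced as rpc.py -s /var/tmp/host.sock. Roughly what the two wrappers resolve to:

    # target-side RPC (nvmf_tgt, default socket /var/tmp/spdk.sock):
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
    # host-side RPC (spdk_tgt behind /var/tmp/host.sock):
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers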
00:16:25.456 13:15:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:25.456 13:15:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.714 13:15:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:25.714 13:15:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:16:25.714 13:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 93376 /var/tmp/host.sock 00:16:25.714 13:15:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 93376 ']' 00:16:25.714 13:15:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/host.sock 00:16:25.714 13:15:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:25.714 13:15:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:25.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:16:25.714 13:15:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:25.714 13:15:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.972 13:15:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:25.972 13:15:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:16:25.972 13:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:16:25.972 13:15:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.972 13:15:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.972 13:15:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.972 13:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:25.972 13:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.DoG 00:16:25.972 13:15:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.972 13:15:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.972 13:15:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.972 13:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.DoG 00:16:25.972 13:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.DoG 00:16:26.537 13:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.iWT ]] 00:16:26.537 13:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.iWT 00:16:26.537 13:15:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.537 13:15:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.537 13:15:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.537 13:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.iWT 00:16:26.537 13:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 
/tmp/spdk.key-sha512.iWT 00:16:26.795 13:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:26.795 13:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.DKk 00:16:26.795 13:15:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.795 13:15:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.795 13:15:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.795 13:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.DKk 00:16:26.795 13:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.DKk 00:16:27.053 13:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.gap ]] 00:16:27.053 13:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.gap 00:16:27.053 13:15:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.053 13:15:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.053 13:15:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.053 13:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.gap 00:16:27.053 13:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.gap 00:16:27.310 13:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:27.310 13:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Tkw 00:16:27.310 13:15:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.310 13:15:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.310 13:15:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.310 13:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.Tkw 00:16:27.310 13:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.Tkw 00:16:27.567 13:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.9Q6 ]] 00:16:27.567 13:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.9Q6 00:16:27.567 13:15:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.567 13:15:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.567 13:15:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.567 13:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.9Q6 00:16:27.568 13:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.9Q6 00:16:27.825 13:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:27.825 
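Every secret file is then registered under a short key name in both keyrings, target and host, so later RPCs can refer to --dhchap-key keyN / --dhchap-ctrlr-key ckeyN instead of carrying the raw material. The key0/ckey0 pair from the trace above, spelled out:

    # target keyring (what rpc_cmd issues against /var/tmp/spdk.sock):
    rpc_cmd keyring_file_add_key key0  /tmp/spdk.key-null.DoG
    rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.iWT
    # host keyring (what hostrpc issues against /var/tmp/host.sock):
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0  /tmp/spdk.key-null.DoG
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.iWT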
13:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Tin 00:16:27.825 13:15:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.825 13:15:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.825 13:15:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.825 13:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.Tin 00:16:27.825 13:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.Tin 00:16:28.083 13:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:16:28.083 13:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:16:28.083 13:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:28.083 13:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:28.083 13:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:28.083 13:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:28.341 13:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:16:28.341 13:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:28.341 13:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:28.341 13:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:28.341 13:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:28.341 13:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:28.341 13:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:28.341 13:15:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.341 13:15:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.342 13:15:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.342 13:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:28.342 13:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:28.599 00:16:28.599 13:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:28.599 13:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:16:28.599 13:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:28.857 13:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.857 13:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.857 13:15:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.857 13:15:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.857 13:15:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.857 13:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:28.857 { 00:16:28.857 "auth": { 00:16:28.857 "dhgroup": "null", 00:16:28.857 "digest": "sha256", 00:16:28.857 "state": "completed" 00:16:28.857 }, 00:16:28.857 "cntlid": 1, 00:16:28.857 "listen_address": { 00:16:28.857 "adrfam": "IPv4", 00:16:28.857 "traddr": "10.0.0.2", 00:16:28.857 "trsvcid": "4420", 00:16:28.857 "trtype": "TCP" 00:16:28.857 }, 00:16:28.857 "peer_address": { 00:16:28.857 "adrfam": "IPv4", 00:16:28.857 "traddr": "10.0.0.1", 00:16:28.857 "trsvcid": "51892", 00:16:28.857 "trtype": "TCP" 00:16:28.857 }, 00:16:28.857 "qid": 0, 00:16:28.857 "state": "enabled" 00:16:28.857 } 00:16:28.857 ]' 00:16:28.857 13:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:28.857 13:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:28.857 13:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:28.857 13:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:28.857 13:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:29.114 13:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:29.114 13:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:29.114 13:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:29.372 13:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-secret DHHC-1:00:N2Y1YzYyNDk4OThiNWI3YTJlYTRlNzNkZTJjNjFmNmY0MjE5OTU4NmYzMDBhZmUyIkAOyQ==: --dhchap-ctrl-secret DHHC-1:03:NWNiY2YyMzRkNDNhMTc4YTM5ZmMxZDQ0YTg3ODY3NDRjYTdhMmQ0MWU5NDAyZjcyN2E2N2JjN2VlYzJiMTUwNLNbInk=: 00:16:33.570 13:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:33.570 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:33.570 13:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:16:33.570 13:15:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.570 13:15:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.570 13:15:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.570 13:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:33.570 13:15:30 
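Each digest/dhgroup/key combination is exercised twice: first through the SPDK host stack (bdev_nvme_attach_controller with --dhchap-key/--dhchap-ctrlr-key, with success asserted by the target's nvmf_subsystem_get_qpairs report showing auth.state completed), then through the kernel initiator with nvme-cli, which takes the literal DHHC-1 strings. One iteration, condensed from the trace above; the long secrets are abbreviated here, their full values appear in the nvme connect line above:

    # host side: restrict the initiator to the digest/dhgroup under test
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
    # target side: allow this host NQN, bound to key0/ckey0
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # SPDK initiator path: attach, verify the authenticated qpair, detach
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
    rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'   # completed
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    # kernel initiator path: same secrets, passed as DHHC-1 strings
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 \
        --hostid c8b8b44b-387e-43b9-a950-dc0d98528a02 \
        --dhchap-secret 'DHHC-1:00:<key0 secret>:' --dhchap-ctrl-secret 'DHHC-1:03:<ckey0 secret>:'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02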
nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:33.570 13:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:33.828 13:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:16:33.828 13:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:33.828 13:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:33.828 13:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:33.828 13:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:33.828 13:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:33.828 13:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.828 13:15:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.828 13:15:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.828 13:15:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.828 13:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.828 13:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.394 00:16:34.394 13:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:34.394 13:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:34.394 13:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.652 13:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.652 13:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.652 13:15:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.652 13:15:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.652 13:15:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.652 13:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:34.652 { 00:16:34.652 "auth": { 00:16:34.652 "dhgroup": "null", 00:16:34.652 "digest": "sha256", 00:16:34.652 "state": "completed" 00:16:34.652 }, 00:16:34.652 "cntlid": 3, 00:16:34.652 "listen_address": { 00:16:34.652 "adrfam": "IPv4", 00:16:34.652 "traddr": "10.0.0.2", 00:16:34.652 "trsvcid": "4420", 00:16:34.652 "trtype": "TCP" 00:16:34.652 }, 00:16:34.652 "peer_address": { 00:16:34.652 "adrfam": "IPv4", 00:16:34.652 
"traddr": "10.0.0.1", 00:16:34.652 "trsvcid": "46670", 00:16:34.652 "trtype": "TCP" 00:16:34.652 }, 00:16:34.652 "qid": 0, 00:16:34.652 "state": "enabled" 00:16:34.652 } 00:16:34.652 ]' 00:16:34.652 13:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:34.652 13:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:34.652 13:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:34.652 13:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:34.652 13:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:34.652 13:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:34.652 13:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:34.652 13:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:34.910 13:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-secret DHHC-1:01:YTYyNDU1MmNlNDc1MThjMjYyMGZlNzhhNzdjYjY5NmGb4/uq: --dhchap-ctrl-secret DHHC-1:02:ZjUwNWMyYjMyMGIwNDI3MmRkNDM2YzIwNzU4ZGMzYWY5ZmUxZmY2NzdiZjM0YjA5rEeJRw==: 00:16:35.847 13:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.847 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.847 13:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:16:35.847 13:15:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.847 13:15:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.847 13:15:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.847 13:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:35.847 13:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:35.847 13:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:36.105 13:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:16:36.105 13:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:36.105 13:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:36.105 13:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:36.105 13:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:36.105 13:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:36.105 13:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.105 13:15:32 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.105 13:15:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.105 13:15:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.105 13:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.105 13:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.363 00:16:36.363 13:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:36.363 13:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:36.363 13:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.929 13:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.929 13:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.929 13:15:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.929 13:15:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.929 13:15:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.929 13:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:36.929 { 00:16:36.929 "auth": { 00:16:36.929 "dhgroup": "null", 00:16:36.929 "digest": "sha256", 00:16:36.929 "state": "completed" 00:16:36.929 }, 00:16:36.929 "cntlid": 5, 00:16:36.929 "listen_address": { 00:16:36.929 "adrfam": "IPv4", 00:16:36.929 "traddr": "10.0.0.2", 00:16:36.929 "trsvcid": "4420", 00:16:36.929 "trtype": "TCP" 00:16:36.929 }, 00:16:36.929 "peer_address": { 00:16:36.929 "adrfam": "IPv4", 00:16:36.929 "traddr": "10.0.0.1", 00:16:36.929 "trsvcid": "46696", 00:16:36.929 "trtype": "TCP" 00:16:36.929 }, 00:16:36.929 "qid": 0, 00:16:36.929 "state": "enabled" 00:16:36.929 } 00:16:36.929 ]' 00:16:36.929 13:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:36.929 13:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:36.929 13:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:36.929 13:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:36.929 13:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:36.929 13:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.929 13:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.929 13:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.194 13:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-secret DHHC-1:02:OTg3OGFlYzU1NGIxYjI2OTRlODk0M2YxZjFmZGY1ZmIxMmJkMDdiM2UxOWZjNDRmzm1SbQ==: --dhchap-ctrl-secret DHHC-1:01:ZTYxNjU2NjQzYTE1ZmE5NTQ0MWYxZmVjNjU4NjA4OGIaIdv0: 00:16:38.139 13:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.139 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.139 13:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:16:38.139 13:15:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.139 13:15:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.139 13:15:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.139 13:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:38.139 13:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:38.139 13:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:38.397 13:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:16:38.397 13:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:38.397 13:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:38.397 13:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:38.397 13:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:38.397 13:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.397 13:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-key key3 00:16:38.397 13:15:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.397 13:15:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.397 13:15:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.398 13:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:38.398 13:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:38.655 00:16:38.655 13:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:38.655 13:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:38.655 13:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:16:38.913 13:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.913 13:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.913 13:15:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.913 13:15:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.913 13:15:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.913 13:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:38.913 { 00:16:38.913 "auth": { 00:16:38.913 "dhgroup": "null", 00:16:38.913 "digest": "sha256", 00:16:38.913 "state": "completed" 00:16:38.913 }, 00:16:38.913 "cntlid": 7, 00:16:38.913 "listen_address": { 00:16:38.913 "adrfam": "IPv4", 00:16:38.913 "traddr": "10.0.0.2", 00:16:38.913 "trsvcid": "4420", 00:16:38.913 "trtype": "TCP" 00:16:38.913 }, 00:16:38.913 "peer_address": { 00:16:38.913 "adrfam": "IPv4", 00:16:38.913 "traddr": "10.0.0.1", 00:16:38.913 "trsvcid": "46724", 00:16:38.913 "trtype": "TCP" 00:16:38.913 }, 00:16:38.913 "qid": 0, 00:16:38.913 "state": "enabled" 00:16:38.913 } 00:16:38.913 ]' 00:16:38.913 13:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:38.913 13:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:38.913 13:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:39.171 13:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:39.171 13:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:39.171 13:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.171 13:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.171 13:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.430 13:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-secret DHHC-1:03:ZWFkNmJmOGMyMjA4ZDI5NGJlZDFhYjg2ZjE4MmYyMGU1NmM3YzQ5NTkxMTJhOTk0MzVkZDU1N2Y5NGJiNjRmNd9JGNU=: 00:16:40.363 13:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:40.363 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:40.363 13:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:16:40.363 13:15:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.363 13:15:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.363 13:15:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.363 13:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:40.363 13:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:40.363 13:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups ffdhe2048 00:16:40.363 13:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:40.363 13:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:16:40.363 13:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:40.363 13:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:40.363 13:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:40.363 13:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:40.363 13:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.363 13:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.363 13:15:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.363 13:15:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.363 13:15:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.363 13:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.363 13:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.928 00:16:40.928 13:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:40.928 13:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.928 13:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:41.187 13:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.187 13:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.187 13:15:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.187 13:15:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.187 13:15:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.187 13:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:41.187 { 00:16:41.187 "auth": { 00:16:41.187 "dhgroup": "ffdhe2048", 00:16:41.187 "digest": "sha256", 00:16:41.187 "state": "completed" 00:16:41.187 }, 00:16:41.187 "cntlid": 9, 00:16:41.187 "listen_address": { 00:16:41.187 "adrfam": "IPv4", 00:16:41.187 "traddr": "10.0.0.2", 00:16:41.187 "trsvcid": "4420", 00:16:41.187 "trtype": "TCP" 00:16:41.187 }, 00:16:41.187 "peer_address": { 00:16:41.187 "adrfam": "IPv4", 00:16:41.187 "traddr": "10.0.0.1", 00:16:41.187 "trsvcid": "51094", 00:16:41.187 "trtype": 
"TCP" 00:16:41.187 }, 00:16:41.187 "qid": 0, 00:16:41.187 "state": "enabled" 00:16:41.187 } 00:16:41.187 ]' 00:16:41.187 13:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:41.187 13:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:41.187 13:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:41.187 13:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:41.187 13:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:41.187 13:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.187 13:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.187 13:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:41.445 13:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-secret DHHC-1:00:N2Y1YzYyNDk4OThiNWI3YTJlYTRlNzNkZTJjNjFmNmY0MjE5OTU4NmYzMDBhZmUyIkAOyQ==: --dhchap-ctrl-secret DHHC-1:03:NWNiY2YyMzRkNDNhMTc4YTM5ZmMxZDQ0YTg3ODY3NDRjYTdhMmQ0MWU5NDAyZjcyN2E2N2JjN2VlYzJiMTUwNLNbInk=: 00:16:42.380 13:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:42.380 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:42.380 13:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:16:42.380 13:15:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.380 13:15:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.380 13:15:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.380 13:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:42.380 13:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:42.380 13:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:42.668 13:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:16:42.668 13:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:42.668 13:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:42.668 13:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:42.668 13:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:42.668 13:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:42.669 13:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.669 13:15:39 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.669 13:15:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.669 13:15:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.669 13:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.669 13:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.926 00:16:42.926 13:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:42.926 13:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.926 13:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:43.184 13:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.184 13:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.184 13:15:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.184 13:15:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.184 13:15:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.184 13:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:43.184 { 00:16:43.184 "auth": { 00:16:43.184 "dhgroup": "ffdhe2048", 00:16:43.184 "digest": "sha256", 00:16:43.184 "state": "completed" 00:16:43.184 }, 00:16:43.184 "cntlid": 11, 00:16:43.184 "listen_address": { 00:16:43.184 "adrfam": "IPv4", 00:16:43.184 "traddr": "10.0.0.2", 00:16:43.184 "trsvcid": "4420", 00:16:43.184 "trtype": "TCP" 00:16:43.184 }, 00:16:43.184 "peer_address": { 00:16:43.184 "adrfam": "IPv4", 00:16:43.184 "traddr": "10.0.0.1", 00:16:43.184 "trsvcid": "51140", 00:16:43.184 "trtype": "TCP" 00:16:43.184 }, 00:16:43.184 "qid": 0, 00:16:43.184 "state": "enabled" 00:16:43.184 } 00:16:43.184 ]' 00:16:43.184 13:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:43.441 13:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:43.441 13:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:43.441 13:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:43.441 13:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:43.441 13:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.441 13:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.441 13:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:43.699 13:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 
10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-secret DHHC-1:01:YTYyNDU1MmNlNDc1MThjMjYyMGZlNzhhNzdjYjY5NmGb4/uq: --dhchap-ctrl-secret DHHC-1:02:ZjUwNWMyYjMyMGIwNDI3MmRkNDM2YzIwNzU4ZGMzYWY5ZmUxZmY2NzdiZjM0YjA5rEeJRw==: 00:16:44.654 13:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:44.654 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:44.655 13:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:16:44.655 13:15:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.655 13:15:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.655 13:15:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.655 13:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:44.655 13:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:44.655 13:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:44.655 13:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:16:44.655 13:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:44.655 13:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:44.655 13:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:44.655 13:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:44.655 13:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:44.655 13:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.655 13:15:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.655 13:15:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.655 13:15:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.655 13:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.655 13:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:45.269 00:16:45.269 13:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:45.269 13:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:45.269 13:15:41 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.269 13:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.526 13:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.526 13:15:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.526 13:15:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.526 13:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.526 13:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:45.526 { 00:16:45.526 "auth": { 00:16:45.526 "dhgroup": "ffdhe2048", 00:16:45.526 "digest": "sha256", 00:16:45.526 "state": "completed" 00:16:45.526 }, 00:16:45.526 "cntlid": 13, 00:16:45.526 "listen_address": { 00:16:45.526 "adrfam": "IPv4", 00:16:45.526 "traddr": "10.0.0.2", 00:16:45.526 "trsvcid": "4420", 00:16:45.526 "trtype": "TCP" 00:16:45.526 }, 00:16:45.526 "peer_address": { 00:16:45.526 "adrfam": "IPv4", 00:16:45.526 "traddr": "10.0.0.1", 00:16:45.526 "trsvcid": "51158", 00:16:45.526 "trtype": "TCP" 00:16:45.526 }, 00:16:45.526 "qid": 0, 00:16:45.526 "state": "enabled" 00:16:45.526 } 00:16:45.526 ]' 00:16:45.526 13:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:45.526 13:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:45.526 13:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:45.526 13:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:45.526 13:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:45.526 13:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.526 13:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.526 13:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.783 13:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-secret DHHC-1:02:OTg3OGFlYzU1NGIxYjI2OTRlODk0M2YxZjFmZGY1ZmIxMmJkMDdiM2UxOWZjNDRmzm1SbQ==: --dhchap-ctrl-secret DHHC-1:01:ZTYxNjU2NjQzYTE1ZmE5NTQ0MWYxZmVjNjU4NjA4OGIaIdv0: 00:16:46.717 13:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.717 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.717 13:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:16:46.717 13:15:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.717 13:15:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.717 13:15:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.717 13:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:46.717 13:15:43 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:46.717 13:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:46.975 13:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:16:46.975 13:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:46.975 13:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:46.975 13:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:46.975 13:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:46.975 13:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.975 13:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-key key3 00:16:46.975 13:15:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.975 13:15:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.975 13:15:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.975 13:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:46.975 13:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:47.233 00:16:47.233 13:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:47.233 13:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.233 13:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:47.491 13:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.491 13:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.491 13:15:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.491 13:15:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.491 13:15:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.491 13:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:47.491 { 00:16:47.491 "auth": { 00:16:47.491 "dhgroup": "ffdhe2048", 00:16:47.491 "digest": "sha256", 00:16:47.491 "state": "completed" 00:16:47.491 }, 00:16:47.491 "cntlid": 15, 00:16:47.491 "listen_address": { 00:16:47.491 "adrfam": "IPv4", 00:16:47.491 "traddr": "10.0.0.2", 00:16:47.491 "trsvcid": "4420", 00:16:47.491 "trtype": "TCP" 00:16:47.491 }, 00:16:47.491 "peer_address": { 00:16:47.491 "adrfam": "IPv4", 00:16:47.491 "traddr": "10.0.0.1", 00:16:47.491 "trsvcid": 
"51190", 00:16:47.491 "trtype": "TCP" 00:16:47.491 }, 00:16:47.491 "qid": 0, 00:16:47.491 "state": "enabled" 00:16:47.491 } 00:16:47.491 ]' 00:16:47.491 13:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:47.491 13:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:47.491 13:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:47.749 13:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:47.749 13:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:47.749 13:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.749 13:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.749 13:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.008 13:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-secret DHHC-1:03:ZWFkNmJmOGMyMjA4ZDI5NGJlZDFhYjg2ZjE4MmYyMGU1NmM3YzQ5NTkxMTJhOTk0MzVkZDU1N2Y5NGJiNjRmNd9JGNU=: 00:16:48.941 13:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.941 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.941 13:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:16:48.941 13:15:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.941 13:15:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.941 13:15:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.941 13:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:48.941 13:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:48.941 13:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:48.941 13:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:48.941 13:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:16:48.941 13:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:48.941 13:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:48.941 13:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:48.941 13:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:48.941 13:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:48.941 13:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-key key0 --dhchap-ctrlr-key 
ckey0 00:16:48.941 13:15:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.941 13:15:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.941 13:15:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.941 13:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.941 13:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:49.198 00:16:49.455 13:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:49.455 13:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.455 13:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:49.713 13:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.713 13:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.713 13:15:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.713 13:15:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.713 13:15:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.713 13:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:49.713 { 00:16:49.713 "auth": { 00:16:49.713 "dhgroup": "ffdhe3072", 00:16:49.713 "digest": "sha256", 00:16:49.713 "state": "completed" 00:16:49.713 }, 00:16:49.713 "cntlid": 17, 00:16:49.713 "listen_address": { 00:16:49.713 "adrfam": "IPv4", 00:16:49.713 "traddr": "10.0.0.2", 00:16:49.713 "trsvcid": "4420", 00:16:49.713 "trtype": "TCP" 00:16:49.713 }, 00:16:49.713 "peer_address": { 00:16:49.713 "adrfam": "IPv4", 00:16:49.713 "traddr": "10.0.0.1", 00:16:49.713 "trsvcid": "51228", 00:16:49.713 "trtype": "TCP" 00:16:49.713 }, 00:16:49.713 "qid": 0, 00:16:49.713 "state": "enabled" 00:16:49.713 } 00:16:49.713 ]' 00:16:49.713 13:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:49.713 13:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:49.713 13:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:49.713 13:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:49.713 13:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:49.970 13:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.970 13:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.970 13:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:50.227 13:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # 
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-secret DHHC-1:00:N2Y1YzYyNDk4OThiNWI3YTJlYTRlNzNkZTJjNjFmNmY0MjE5OTU4NmYzMDBhZmUyIkAOyQ==: --dhchap-ctrl-secret DHHC-1:03:NWNiY2YyMzRkNDNhMTc4YTM5ZmMxZDQ0YTg3ODY3NDRjYTdhMmQ0MWU5NDAyZjcyN2E2N2JjN2VlYzJiMTUwNLNbInk=: 00:16:50.792 13:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.792 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.792 13:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:16:50.792 13:15:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.792 13:15:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.792 13:15:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.792 13:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:50.792 13:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:50.792 13:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:51.358 13:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:16:51.358 13:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:51.358 13:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:51.358 13:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:51.358 13:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:51.358 13:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:51.358 13:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:51.358 13:15:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.358 13:15:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.358 13:15:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.358 13:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:51.358 13:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:51.615 00:16:51.615 13:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:51.615 13:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
hostrpc bdev_nvme_get_controllers 00:16:51.615 13:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.873 13:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.873 13:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.873 13:15:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.873 13:15:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.873 13:15:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.873 13:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:51.873 { 00:16:51.873 "auth": { 00:16:51.873 "dhgroup": "ffdhe3072", 00:16:51.873 "digest": "sha256", 00:16:51.873 "state": "completed" 00:16:51.873 }, 00:16:51.873 "cntlid": 19, 00:16:51.873 "listen_address": { 00:16:51.873 "adrfam": "IPv4", 00:16:51.873 "traddr": "10.0.0.2", 00:16:51.873 "trsvcid": "4420", 00:16:51.873 "trtype": "TCP" 00:16:51.873 }, 00:16:51.873 "peer_address": { 00:16:51.873 "adrfam": "IPv4", 00:16:51.873 "traddr": "10.0.0.1", 00:16:51.873 "trsvcid": "48644", 00:16:51.873 "trtype": "TCP" 00:16:51.873 }, 00:16:51.873 "qid": 0, 00:16:51.873 "state": "enabled" 00:16:51.873 } 00:16:51.873 ]' 00:16:51.873 13:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:51.873 13:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:51.873 13:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:51.873 13:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:51.873 13:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:52.131 13:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.131 13:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.131 13:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:52.388 13:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-secret DHHC-1:01:YTYyNDU1MmNlNDc1MThjMjYyMGZlNzhhNzdjYjY5NmGb4/uq: --dhchap-ctrl-secret DHHC-1:02:ZjUwNWMyYjMyMGIwNDI3MmRkNDM2YzIwNzU4ZGMzYWY5ZmUxZmY2NzdiZjM0YjA5rEeJRw==: 00:16:53.064 13:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.064 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.064 13:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:16:53.064 13:15:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.064 13:15:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.064 13:15:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.064 13:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for 
keyid in "${!keys[@]}" 00:16:53.064 13:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:53.064 13:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:53.322 13:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:16:53.322 13:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:53.322 13:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:53.322 13:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:53.322 13:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:53.322 13:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:53.322 13:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.322 13:15:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.322 13:15:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.322 13:15:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.322 13:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.322 13:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.885 00:16:53.885 13:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:53.885 13:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:53.885 13:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.885 13:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.885 13:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.885 13:15:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.885 13:15:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.142 13:15:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.142 13:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:54.142 { 00:16:54.142 "auth": { 00:16:54.142 "dhgroup": "ffdhe3072", 00:16:54.142 "digest": "sha256", 00:16:54.142 "state": "completed" 00:16:54.142 }, 00:16:54.142 "cntlid": 21, 00:16:54.142 "listen_address": { 00:16:54.142 "adrfam": "IPv4", 00:16:54.142 "traddr": "10.0.0.2", 00:16:54.142 "trsvcid": "4420", 00:16:54.142 "trtype": "TCP" 00:16:54.142 }, 
00:16:54.142 "peer_address": { 00:16:54.142 "adrfam": "IPv4", 00:16:54.142 "traddr": "10.0.0.1", 00:16:54.142 "trsvcid": "48658", 00:16:54.142 "trtype": "TCP" 00:16:54.142 }, 00:16:54.142 "qid": 0, 00:16:54.142 "state": "enabled" 00:16:54.142 } 00:16:54.142 ]' 00:16:54.142 13:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:54.142 13:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:54.142 13:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:54.142 13:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:54.142 13:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:54.142 13:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.142 13:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.142 13:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.400 13:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-secret DHHC-1:02:OTg3OGFlYzU1NGIxYjI2OTRlODk0M2YxZjFmZGY1ZmIxMmJkMDdiM2UxOWZjNDRmzm1SbQ==: --dhchap-ctrl-secret DHHC-1:01:ZTYxNjU2NjQzYTE1ZmE5NTQ0MWYxZmVjNjU4NjA4OGIaIdv0: 00:16:55.334 13:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.334 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.334 13:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:16:55.334 13:15:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.334 13:15:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.334 13:15:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.334 13:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:55.334 13:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:55.334 13:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:55.591 13:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:16:55.591 13:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:55.591 13:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:55.591 13:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:55.591 13:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:55.591 13:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:55.591 13:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-key key3 00:16:55.591 13:15:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.591 13:15:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.591 13:15:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.591 13:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:55.591 13:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:55.849 00:16:55.849 13:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:55.849 13:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.849 13:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:56.107 13:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.107 13:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:56.107 13:15:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.107 13:15:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.365 13:15:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.365 13:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:56.365 { 00:16:56.365 "auth": { 00:16:56.365 "dhgroup": "ffdhe3072", 00:16:56.365 "digest": "sha256", 00:16:56.365 "state": "completed" 00:16:56.365 }, 00:16:56.365 "cntlid": 23, 00:16:56.365 "listen_address": { 00:16:56.365 "adrfam": "IPv4", 00:16:56.365 "traddr": "10.0.0.2", 00:16:56.365 "trsvcid": "4420", 00:16:56.365 "trtype": "TCP" 00:16:56.365 }, 00:16:56.365 "peer_address": { 00:16:56.365 "adrfam": "IPv4", 00:16:56.365 "traddr": "10.0.0.1", 00:16:56.365 "trsvcid": "48684", 00:16:56.365 "trtype": "TCP" 00:16:56.365 }, 00:16:56.365 "qid": 0, 00:16:56.365 "state": "enabled" 00:16:56.365 } 00:16:56.365 ]' 00:16:56.365 13:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:56.365 13:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:56.365 13:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:56.365 13:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:56.365 13:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:56.365 13:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:56.365 13:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.365 13:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.661 13:15:53 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-secret DHHC-1:03:ZWFkNmJmOGMyMjA4ZDI5NGJlZDFhYjg2ZjE4MmYyMGU1NmM3YzQ5NTkxMTJhOTk0MzVkZDU1N2Y5NGJiNjRmNd9JGNU=: 00:16:57.226 13:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:57.226 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:57.226 13:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:16:57.226 13:15:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.226 13:15:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.226 13:15:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.227 13:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:57.227 13:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:57.227 13:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:57.227 13:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:57.484 13:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:16:57.484 13:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:57.484 13:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:57.484 13:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:57.484 13:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:57.484 13:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:57.484 13:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:57.484 13:15:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.484 13:15:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.484 13:15:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.484 13:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:57.484 13:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:58.049 00:16:58.049 13:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 
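The iteration traced above (sha256 digest, ffdhe4096 group, key index 0) reduces to three RPC calls: one on the host to constrain which DH-HMAC-CHAP parameters the initiator may offer, one on the target to authorize the host NQN with a key pair, and one on the host to attach a controller, which is the step that actually drives the authentication handshake. The following is a condensed sketch reconstructed from the xtrace output above, not a verbatim excerpt: rpc_cmd is the test's target-side RPC wrapper exactly as it appears in the trace, the host-side calls go through the /var/tmp/host.sock socket (the full /home/vagrant/spdk_repo path is shortened to scripts/rpc.py), and $HOSTNQN stands in for the literal nqn.2014-08.org.nvmexpress:uuid host NQN.

  # Host side: limit the DH-HMAC-CHAP digests and DH groups the initiator may offer.
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096

  # Target side: authorize $HOSTNQN on cnode0 with key0; ckey0 additionally
  # enables bidirectional (controller-to-host) authentication.
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # Host side: attach a controller; the connect triggers the DH-HMAC-CHAP exchange.
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
      -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$HOSTNQN" \
      -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

The same three calls repeat for every digest/dhgroup/key combination in the outer loops, which is why the trace reads so uniformly from one iteration to the next.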
00:16:58.049 13:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.049 13:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:58.325 13:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.325 13:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.325 13:15:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.325 13:15:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.325 13:15:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.325 13:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:58.325 { 00:16:58.325 "auth": { 00:16:58.325 "dhgroup": "ffdhe4096", 00:16:58.325 "digest": "sha256", 00:16:58.325 "state": "completed" 00:16:58.325 }, 00:16:58.325 "cntlid": 25, 00:16:58.325 "listen_address": { 00:16:58.325 "adrfam": "IPv4", 00:16:58.325 "traddr": "10.0.0.2", 00:16:58.325 "trsvcid": "4420", 00:16:58.325 "trtype": "TCP" 00:16:58.325 }, 00:16:58.325 "peer_address": { 00:16:58.325 "adrfam": "IPv4", 00:16:58.325 "traddr": "10.0.0.1", 00:16:58.325 "trsvcid": "48718", 00:16:58.325 "trtype": "TCP" 00:16:58.325 }, 00:16:58.325 "qid": 0, 00:16:58.325 "state": "enabled" 00:16:58.325 } 00:16:58.325 ]' 00:16:58.325 13:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:58.325 13:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:58.325 13:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:58.608 13:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:58.608 13:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:58.608 13:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.608 13:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.608 13:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.867 13:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-secret DHHC-1:00:N2Y1YzYyNDk4OThiNWI3YTJlYTRlNzNkZTJjNjFmNmY0MjE5OTU4NmYzMDBhZmUyIkAOyQ==: --dhchap-ctrl-secret DHHC-1:03:NWNiY2YyMzRkNDNhMTc4YTM5ZmMxZDQ0YTg3ODY3NDRjYTdhMmQ0MWU5NDAyZjcyN2E2N2JjN2VlYzJiMTUwNLNbInk=: 00:16:59.433 13:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.433 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.433 13:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:16:59.433 13:15:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.433 13:15:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.433 13:15:56 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.433 13:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:59.433 13:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:59.433 13:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:59.997 13:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:16:59.997 13:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:59.997 13:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:59.997 13:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:59.997 13:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:59.997 13:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.997 13:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:59.998 13:15:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.998 13:15:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.998 13:15:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.998 13:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:59.998 13:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:00.255 00:17:00.255 13:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:00.255 13:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.255 13:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:00.512 13:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.512 13:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.512 13:15:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.512 13:15:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.512 13:15:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.512 13:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:00.512 { 00:17:00.512 "auth": { 00:17:00.512 "dhgroup": "ffdhe4096", 00:17:00.512 "digest": "sha256", 00:17:00.512 "state": "completed" 00:17:00.512 }, 00:17:00.512 "cntlid": 27, 00:17:00.512 "listen_address": { 00:17:00.512 "adrfam": 
"IPv4", 00:17:00.512 "traddr": "10.0.0.2", 00:17:00.512 "trsvcid": "4420", 00:17:00.512 "trtype": "TCP" 00:17:00.512 }, 00:17:00.512 "peer_address": { 00:17:00.512 "adrfam": "IPv4", 00:17:00.512 "traddr": "10.0.0.1", 00:17:00.512 "trsvcid": "48746", 00:17:00.512 "trtype": "TCP" 00:17:00.512 }, 00:17:00.512 "qid": 0, 00:17:00.512 "state": "enabled" 00:17:00.512 } 00:17:00.512 ]' 00:17:00.512 13:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:00.512 13:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:00.512 13:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:00.512 13:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:00.512 13:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:00.770 13:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.770 13:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.770 13:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.028 13:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-secret DHHC-1:01:YTYyNDU1MmNlNDc1MThjMjYyMGZlNzhhNzdjYjY5NmGb4/uq: --dhchap-ctrl-secret DHHC-1:02:ZjUwNWMyYjMyMGIwNDI3MmRkNDM2YzIwNzU4ZGMzYWY5ZmUxZmY2NzdiZjM0YjA5rEeJRw==: 00:17:01.594 13:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:01.594 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:01.594 13:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:17:01.594 13:15:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.594 13:15:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.594 13:15:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.594 13:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:01.594 13:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:01.594 13:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:01.852 13:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:17:01.852 13:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:01.852 13:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:01.852 13:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:01.852 13:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:01.852 13:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:01.852 13:15:58 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:01.852 13:15:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.852 13:15:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.110 13:15:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.110 13:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:02.110 13:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:02.367 00:17:02.367 13:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:02.367 13:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.367 13:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:02.625 13:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.625 13:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:02.625 13:15:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.625 13:15:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.625 13:15:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.625 13:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:02.625 { 00:17:02.625 "auth": { 00:17:02.625 "dhgroup": "ffdhe4096", 00:17:02.625 "digest": "sha256", 00:17:02.625 "state": "completed" 00:17:02.625 }, 00:17:02.625 "cntlid": 29, 00:17:02.625 "listen_address": { 00:17:02.625 "adrfam": "IPv4", 00:17:02.625 "traddr": "10.0.0.2", 00:17:02.625 "trsvcid": "4420", 00:17:02.625 "trtype": "TCP" 00:17:02.625 }, 00:17:02.625 "peer_address": { 00:17:02.625 "adrfam": "IPv4", 00:17:02.625 "traddr": "10.0.0.1", 00:17:02.625 "trsvcid": "42476", 00:17:02.625 "trtype": "TCP" 00:17:02.625 }, 00:17:02.625 "qid": 0, 00:17:02.625 "state": "enabled" 00:17:02.625 } 00:17:02.625 ]' 00:17:02.625 13:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:02.625 13:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:02.625 13:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:02.625 13:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:02.625 13:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:02.883 13:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:02.883 13:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:02.883 13:15:59 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.140 13:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-secret DHHC-1:02:OTg3OGFlYzU1NGIxYjI2OTRlODk0M2YxZjFmZGY1ZmIxMmJkMDdiM2UxOWZjNDRmzm1SbQ==: --dhchap-ctrl-secret DHHC-1:01:ZTYxNjU2NjQzYTE1ZmE5NTQ0MWYxZmVjNjU4NjA4OGIaIdv0: 00:17:03.707 13:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:03.707 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.708 13:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:17:03.708 13:16:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.708 13:16:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.708 13:16:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.708 13:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:03.708 13:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:03.708 13:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:03.965 13:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:17:03.965 13:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:03.965 13:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:03.965 13:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:03.965 13:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:03.965 13:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:03.965 13:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-key key3 00:17:03.965 13:16:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.965 13:16:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.222 13:16:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.222 13:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:04.222 13:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:04.480 00:17:04.480 13:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc 
bdev_nvme_get_controllers 00:17:04.480 13:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:04.480 13:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.737 13:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.737 13:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.737 13:16:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.737 13:16:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.737 13:16:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.737 13:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:04.737 { 00:17:04.737 "auth": { 00:17:04.737 "dhgroup": "ffdhe4096", 00:17:04.737 "digest": "sha256", 00:17:04.737 "state": "completed" 00:17:04.737 }, 00:17:04.737 "cntlid": 31, 00:17:04.737 "listen_address": { 00:17:04.737 "adrfam": "IPv4", 00:17:04.737 "traddr": "10.0.0.2", 00:17:04.737 "trsvcid": "4420", 00:17:04.737 "trtype": "TCP" 00:17:04.737 }, 00:17:04.737 "peer_address": { 00:17:04.737 "adrfam": "IPv4", 00:17:04.737 "traddr": "10.0.0.1", 00:17:04.737 "trsvcid": "42492", 00:17:04.737 "trtype": "TCP" 00:17:04.737 }, 00:17:04.737 "qid": 0, 00:17:04.737 "state": "enabled" 00:17:04.737 } 00:17:04.737 ]' 00:17:04.737 13:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:04.737 13:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:04.737 13:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:04.737 13:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:04.737 13:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:05.012 13:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:05.012 13:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:05.012 13:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:05.290 13:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-secret DHHC-1:03:ZWFkNmJmOGMyMjA4ZDI5NGJlZDFhYjg2ZjE4MmYyMGU1NmM3YzQ5NTkxMTJhOTk0MzVkZDU1N2Y5NGJiNjRmNd9JGNU=: 00:17:05.858 13:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:05.858 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:05.858 13:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:17:05.858 13:16:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.858 13:16:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.858 13:16:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.858 13:16:02 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:05.858 13:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:05.858 13:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:05.858 13:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:06.116 13:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:17:06.116 13:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:06.116 13:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:06.116 13:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:06.116 13:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:06.116 13:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:06.116 13:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:06.116 13:16:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.116 13:16:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.116 13:16:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.116 13:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:06.116 13:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:06.681 00:17:06.681 13:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:06.681 13:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:06.681 13:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.939 13:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.939 13:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:06.939 13:16:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.939 13:16:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.939 13:16:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.939 13:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:06.939 { 00:17:06.939 "auth": { 00:17:06.939 "dhgroup": "ffdhe6144", 00:17:06.939 "digest": "sha256", 00:17:06.939 "state": "completed" 00:17:06.939 }, 00:17:06.939 "cntlid": 33, 00:17:06.939 "listen_address": { 00:17:06.939 
"adrfam": "IPv4", 00:17:06.939 "traddr": "10.0.0.2", 00:17:06.939 "trsvcid": "4420", 00:17:06.939 "trtype": "TCP" 00:17:06.939 }, 00:17:06.939 "peer_address": { 00:17:06.939 "adrfam": "IPv4", 00:17:06.939 "traddr": "10.0.0.1", 00:17:06.939 "trsvcid": "42526", 00:17:06.939 "trtype": "TCP" 00:17:06.939 }, 00:17:06.939 "qid": 0, 00:17:06.939 "state": "enabled" 00:17:06.939 } 00:17:06.939 ]' 00:17:06.939 13:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:07.197 13:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:07.197 13:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:07.197 13:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:07.197 13:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:07.197 13:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.197 13:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.197 13:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.455 13:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-secret DHHC-1:00:N2Y1YzYyNDk4OThiNWI3YTJlYTRlNzNkZTJjNjFmNmY0MjE5OTU4NmYzMDBhZmUyIkAOyQ==: --dhchap-ctrl-secret DHHC-1:03:NWNiY2YyMzRkNDNhMTc4YTM5ZmMxZDQ0YTg3ODY3NDRjYTdhMmQ0MWU5NDAyZjcyN2E2N2JjN2VlYzJiMTUwNLNbInk=: 00:17:08.021 13:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:08.021 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:08.021 13:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:17:08.021 13:16:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.021 13:16:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.021 13:16:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.021 13:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:08.021 13:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:08.021 13:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:08.587 13:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:17:08.587 13:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:08.587 13:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:08.587 13:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:08.587 13:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:08.587 13:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:17:08.587 13:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.587 13:16:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.587 13:16:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.587 13:16:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.587 13:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.587 13:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.845 00:17:08.845 13:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:08.845 13:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:08.845 13:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:09.409 13:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.409 13:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:09.409 13:16:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.409 13:16:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.409 13:16:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.409 13:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:09.409 { 00:17:09.409 "auth": { 00:17:09.409 "dhgroup": "ffdhe6144", 00:17:09.409 "digest": "sha256", 00:17:09.410 "state": "completed" 00:17:09.410 }, 00:17:09.410 "cntlid": 35, 00:17:09.410 "listen_address": { 00:17:09.410 "adrfam": "IPv4", 00:17:09.410 "traddr": "10.0.0.2", 00:17:09.410 "trsvcid": "4420", 00:17:09.410 "trtype": "TCP" 00:17:09.410 }, 00:17:09.410 "peer_address": { 00:17:09.410 "adrfam": "IPv4", 00:17:09.410 "traddr": "10.0.0.1", 00:17:09.410 "trsvcid": "42536", 00:17:09.410 "trtype": "TCP" 00:17:09.410 }, 00:17:09.410 "qid": 0, 00:17:09.410 "state": "enabled" 00:17:09.410 } 00:17:09.410 ]' 00:17:09.410 13:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:09.410 13:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:09.410 13:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:09.410 13:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:09.410 13:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:09.410 13:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:09.410 13:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:17:09.410 13:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:09.666 13:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-secret DHHC-1:01:YTYyNDU1MmNlNDc1MThjMjYyMGZlNzhhNzdjYjY5NmGb4/uq: --dhchap-ctrl-secret DHHC-1:02:ZjUwNWMyYjMyMGIwNDI3MmRkNDM2YzIwNzU4ZGMzYWY5ZmUxZmY2NzdiZjM0YjA5rEeJRw==: 00:17:10.598 13:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:10.598 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:10.598 13:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:17:10.598 13:16:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.598 13:16:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.598 13:16:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.598 13:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:10.598 13:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:10.598 13:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:10.861 13:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:17:10.861 13:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:10.861 13:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:10.861 13:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:10.861 13:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:10.861 13:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:10.861 13:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:10.861 13:16:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.861 13:16:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.861 13:16:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.861 13:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:10.861 13:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:17:11.480 00:17:11.480 13:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:11.480 13:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:11.480 13:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.738 13:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.738 13:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:11.738 13:16:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.738 13:16:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.738 13:16:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.738 13:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:11.738 { 00:17:11.738 "auth": { 00:17:11.738 "dhgroup": "ffdhe6144", 00:17:11.738 "digest": "sha256", 00:17:11.738 "state": "completed" 00:17:11.738 }, 00:17:11.738 "cntlid": 37, 00:17:11.738 "listen_address": { 00:17:11.738 "adrfam": "IPv4", 00:17:11.738 "traddr": "10.0.0.2", 00:17:11.738 "trsvcid": "4420", 00:17:11.738 "trtype": "TCP" 00:17:11.738 }, 00:17:11.738 "peer_address": { 00:17:11.738 "adrfam": "IPv4", 00:17:11.738 "traddr": "10.0.0.1", 00:17:11.738 "trsvcid": "54644", 00:17:11.738 "trtype": "TCP" 00:17:11.738 }, 00:17:11.738 "qid": 0, 00:17:11.738 "state": "enabled" 00:17:11.738 } 00:17:11.738 ]' 00:17:11.738 13:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:11.738 13:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:11.738 13:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:11.738 13:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:11.738 13:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:11.738 13:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:11.738 13:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:11.738 13:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:11.996 13:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-secret DHHC-1:02:OTg3OGFlYzU1NGIxYjI2OTRlODk0M2YxZjFmZGY1ZmIxMmJkMDdiM2UxOWZjNDRmzm1SbQ==: --dhchap-ctrl-secret DHHC-1:01:ZTYxNjU2NjQzYTE1ZmE5NTQ0MWYxZmVjNjU4NjA4OGIaIdv0: 00:17:12.929 13:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:12.929 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:12.929 13:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:17:12.929 13:16:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.929 13:16:09 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:12.929 13:16:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.929 13:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:12.929 13:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:12.929 13:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:13.188 13:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:17:13.188 13:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:13.188 13:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:13.188 13:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:13.188 13:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:13.188 13:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:13.188 13:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-key key3 00:17:13.188 13:16:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.188 13:16:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.188 13:16:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.188 13:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:13.188 13:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:13.754 00:17:13.754 13:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:13.754 13:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:13.754 13:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:14.012 13:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.012 13:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:14.012 13:16:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.012 13:16:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.012 13:16:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.012 13:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:14.012 { 00:17:14.012 "auth": { 00:17:14.012 "dhgroup": "ffdhe6144", 00:17:14.012 "digest": "sha256", 00:17:14.012 "state": "completed" 00:17:14.012 }, 00:17:14.012 "cntlid": 39, 00:17:14.012 "listen_address": { 
00:17:14.012 "adrfam": "IPv4", 00:17:14.012 "traddr": "10.0.0.2", 00:17:14.012 "trsvcid": "4420", 00:17:14.012 "trtype": "TCP" 00:17:14.012 }, 00:17:14.012 "peer_address": { 00:17:14.012 "adrfam": "IPv4", 00:17:14.012 "traddr": "10.0.0.1", 00:17:14.012 "trsvcid": "54674", 00:17:14.012 "trtype": "TCP" 00:17:14.012 }, 00:17:14.012 "qid": 0, 00:17:14.012 "state": "enabled" 00:17:14.012 } 00:17:14.012 ]' 00:17:14.012 13:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:14.012 13:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:14.012 13:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:14.012 13:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:14.012 13:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:14.012 13:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:14.012 13:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.012 13:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.579 13:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-secret DHHC-1:03:ZWFkNmJmOGMyMjA4ZDI5NGJlZDFhYjg2ZjE4MmYyMGU1NmM3YzQ5NTkxMTJhOTk0MzVkZDU1N2Y5NGJiNjRmNd9JGNU=: 00:17:15.145 13:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.145 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.145 13:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:17:15.145 13:16:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.145 13:16:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.145 13:16:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.145 13:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:15.145 13:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:15.145 13:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:15.146 13:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:15.404 13:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:17:15.404 13:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:15.404 13:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:15.404 13:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:15.404 13:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:15.404 13:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:15.404 13:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:15.404 13:16:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.404 13:16:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.404 13:16:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.404 13:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:15.404 13:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.338 00:17:16.338 13:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:16.338 13:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:16.338 13:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.596 13:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.596 13:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.596 13:16:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.596 13:16:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.596 13:16:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.596 13:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:16.596 { 00:17:16.596 "auth": { 00:17:16.596 "dhgroup": "ffdhe8192", 00:17:16.596 "digest": "sha256", 00:17:16.596 "state": "completed" 00:17:16.596 }, 00:17:16.596 "cntlid": 41, 00:17:16.596 "listen_address": { 00:17:16.596 "adrfam": "IPv4", 00:17:16.596 "traddr": "10.0.0.2", 00:17:16.596 "trsvcid": "4420", 00:17:16.596 "trtype": "TCP" 00:17:16.596 }, 00:17:16.596 "peer_address": { 00:17:16.596 "adrfam": "IPv4", 00:17:16.596 "traddr": "10.0.0.1", 00:17:16.596 "trsvcid": "54708", 00:17:16.596 "trtype": "TCP" 00:17:16.596 }, 00:17:16.596 "qid": 0, 00:17:16.596 "state": "enabled" 00:17:16.596 } 00:17:16.596 ]' 00:17:16.596 13:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:16.596 13:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:16.596 13:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:16.596 13:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:16.596 13:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:16.596 13:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:16.596 13:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # 
hostrpc bdev_nvme_detach_controller nvme0 00:17:16.596 13:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.853 13:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-secret DHHC-1:00:N2Y1YzYyNDk4OThiNWI3YTJlYTRlNzNkZTJjNjFmNmY0MjE5OTU4NmYzMDBhZmUyIkAOyQ==: --dhchap-ctrl-secret DHHC-1:03:NWNiY2YyMzRkNDNhMTc4YTM5ZmMxZDQ0YTg3ODY3NDRjYTdhMmQ0MWU5NDAyZjcyN2E2N2JjN2VlYzJiMTUwNLNbInk=: 00:17:17.787 13:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:17.787 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:17.787 13:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:17:17.787 13:16:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.787 13:16:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.787 13:16:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.787 13:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:17.787 13:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:17.787 13:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:18.046 13:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:17:18.046 13:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:18.046 13:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:18.046 13:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:18.046 13:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:18.046 13:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:18.046 13:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:18.046 13:16:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.046 13:16:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.046 13:16:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.046 13:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:18.046 13:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:18.979 00:17:18.979 13:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:18.979 13:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:18.979 13:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.979 13:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.979 13:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.979 13:16:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.979 13:16:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.979 13:16:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.979 13:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:18.979 { 00:17:18.979 "auth": { 00:17:18.979 "dhgroup": "ffdhe8192", 00:17:18.979 "digest": "sha256", 00:17:18.979 "state": "completed" 00:17:18.979 }, 00:17:18.979 "cntlid": 43, 00:17:18.979 "listen_address": { 00:17:18.979 "adrfam": "IPv4", 00:17:18.979 "traddr": "10.0.0.2", 00:17:18.979 "trsvcid": "4420", 00:17:18.979 "trtype": "TCP" 00:17:18.979 }, 00:17:18.979 "peer_address": { 00:17:18.979 "adrfam": "IPv4", 00:17:18.979 "traddr": "10.0.0.1", 00:17:18.979 "trsvcid": "54734", 00:17:18.979 "trtype": "TCP" 00:17:18.979 }, 00:17:18.979 "qid": 0, 00:17:18.979 "state": "enabled" 00:17:18.979 } 00:17:18.979 ]' 00:17:18.979 13:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:19.238 13:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:19.238 13:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:19.238 13:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:19.238 13:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:19.238 13:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:19.238 13:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:19.238 13:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:19.496 13:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-secret DHHC-1:01:YTYyNDU1MmNlNDc1MThjMjYyMGZlNzhhNzdjYjY5NmGb4/uq: --dhchap-ctrl-secret DHHC-1:02:ZjUwNWMyYjMyMGIwNDI3MmRkNDM2YzIwNzU4ZGMzYWY5ZmUxZmY2NzdiZjM0YjA5rEeJRw==: 00:17:20.430 13:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:20.430 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:20.430 13:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:17:20.430 13:16:16 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.430 13:16:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.430 13:16:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.430 13:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:20.430 13:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:20.430 13:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:20.430 13:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:17:20.430 13:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:20.430 13:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:20.430 13:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:20.430 13:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:20.430 13:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:20.430 13:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:20.430 13:16:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.430 13:16:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.430 13:16:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.430 13:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:20.430 13:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.362 00:17:21.362 13:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:21.362 13:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:21.362 13:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:21.620 13:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.620 13:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:21.620 13:16:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.620 13:16:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.620 13:16:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.620 13:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:21.620 { 00:17:21.620 "auth": { 
00:17:21.620 "dhgroup": "ffdhe8192", 00:17:21.620 "digest": "sha256", 00:17:21.620 "state": "completed" 00:17:21.620 }, 00:17:21.620 "cntlid": 45, 00:17:21.620 "listen_address": { 00:17:21.620 "adrfam": "IPv4", 00:17:21.620 "traddr": "10.0.0.2", 00:17:21.620 "trsvcid": "4420", 00:17:21.620 "trtype": "TCP" 00:17:21.620 }, 00:17:21.620 "peer_address": { 00:17:21.620 "adrfam": "IPv4", 00:17:21.620 "traddr": "10.0.0.1", 00:17:21.620 "trsvcid": "51264", 00:17:21.620 "trtype": "TCP" 00:17:21.620 }, 00:17:21.620 "qid": 0, 00:17:21.620 "state": "enabled" 00:17:21.620 } 00:17:21.620 ]' 00:17:21.620 13:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:21.620 13:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:21.620 13:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:21.620 13:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:21.620 13:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:21.620 13:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:21.620 13:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:21.620 13:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:21.877 13:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-secret DHHC-1:02:OTg3OGFlYzU1NGIxYjI2OTRlODk0M2YxZjFmZGY1ZmIxMmJkMDdiM2UxOWZjNDRmzm1SbQ==: --dhchap-ctrl-secret DHHC-1:01:ZTYxNjU2NjQzYTE1ZmE5NTQ0MWYxZmVjNjU4NjA4OGIaIdv0: 00:17:22.872 13:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:22.872 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:22.873 13:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:17:22.873 13:16:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.873 13:16:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.873 13:16:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.873 13:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:22.873 13:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:22.873 13:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:22.873 13:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:17:22.873 13:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:22.873 13:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:22.873 13:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:22.873 13:16:19 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key3 00:17:22.873 13:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:22.873 13:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-key key3 00:17:22.873 13:16:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.873 13:16:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.130 13:16:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.130 13:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:23.130 13:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:23.697 00:17:23.697 13:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:23.697 13:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.697 13:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:23.955 13:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.955 13:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:23.955 13:16:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.955 13:16:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.955 13:16:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.955 13:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:23.955 { 00:17:23.955 "auth": { 00:17:23.955 "dhgroup": "ffdhe8192", 00:17:23.955 "digest": "sha256", 00:17:23.955 "state": "completed" 00:17:23.955 }, 00:17:23.955 "cntlid": 47, 00:17:23.955 "listen_address": { 00:17:23.955 "adrfam": "IPv4", 00:17:23.955 "traddr": "10.0.0.2", 00:17:23.955 "trsvcid": "4420", 00:17:23.955 "trtype": "TCP" 00:17:23.955 }, 00:17:23.955 "peer_address": { 00:17:23.955 "adrfam": "IPv4", 00:17:23.955 "traddr": "10.0.0.1", 00:17:23.955 "trsvcid": "51280", 00:17:23.955 "trtype": "TCP" 00:17:23.955 }, 00:17:23.955 "qid": 0, 00:17:23.955 "state": "enabled" 00:17:23.955 } 00:17:23.955 ]' 00:17:23.955 13:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:23.955 13:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:23.955 13:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:23.955 13:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:23.955 13:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:23.955 13:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:23.955 13:16:20 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:23.955 13:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.521 13:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-secret DHHC-1:03:ZWFkNmJmOGMyMjA4ZDI5NGJlZDFhYjg2ZjE4MmYyMGU1NmM3YzQ5NTkxMTJhOTk0MzVkZDU1N2Y5NGJiNjRmNd9JGNU=: 00:17:25.088 13:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:25.088 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:25.088 13:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:17:25.088 13:16:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.088 13:16:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.088 13:16:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.088 13:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:17:25.088 13:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:25.088 13:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:25.088 13:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:25.088 13:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:25.345 13:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:17:25.345 13:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:25.345 13:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:25.345 13:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:25.345 13:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:25.345 13:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:25.346 13:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.346 13:16:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.346 13:16:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.346 13:16:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.346 13:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.346 13:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.603 00:17:25.603 13:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:25.603 13:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:25.603 13:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.169 13:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.169 13:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:26.169 13:16:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.169 13:16:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.169 13:16:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.169 13:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:26.169 { 00:17:26.169 "auth": { 00:17:26.169 "dhgroup": "null", 00:17:26.169 "digest": "sha384", 00:17:26.169 "state": "completed" 00:17:26.169 }, 00:17:26.169 "cntlid": 49, 00:17:26.169 "listen_address": { 00:17:26.169 "adrfam": "IPv4", 00:17:26.169 "traddr": "10.0.0.2", 00:17:26.169 "trsvcid": "4420", 00:17:26.169 "trtype": "TCP" 00:17:26.169 }, 00:17:26.169 "peer_address": { 00:17:26.169 "adrfam": "IPv4", 00:17:26.169 "traddr": "10.0.0.1", 00:17:26.169 "trsvcid": "51292", 00:17:26.169 "trtype": "TCP" 00:17:26.169 }, 00:17:26.169 "qid": 0, 00:17:26.169 "state": "enabled" 00:17:26.169 } 00:17:26.169 ]' 00:17:26.169 13:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:26.169 13:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:26.169 13:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:26.169 13:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:26.169 13:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:26.169 13:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:26.169 13:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:26.169 13:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:26.427 13:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-secret DHHC-1:00:N2Y1YzYyNDk4OThiNWI3YTJlYTRlNzNkZTJjNjFmNmY0MjE5OTU4NmYzMDBhZmUyIkAOyQ==: --dhchap-ctrl-secret DHHC-1:03:NWNiY2YyMzRkNDNhMTc4YTM5ZmMxZDQ0YTg3ODY3NDRjYTdhMmQ0MWU5NDAyZjcyN2E2N2JjN2VlYzJiMTUwNLNbInk=: 00:17:27.360 13:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:27.360 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:27.360 13:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:17:27.360 13:16:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.360 13:16:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.360 13:16:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.360 13:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:27.360 13:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:27.360 13:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:27.360 13:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:17:27.360 13:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:27.360 13:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:27.360 13:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:27.360 13:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:27.360 13:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:27.361 13:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:27.361 13:16:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.361 13:16:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.361 13:16:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.361 13:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:27.361 13:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:27.650 00:17:27.650 13:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:27.650 13:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:27.650 13:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.221 13:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.221 13:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.221 13:16:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.221 13:16:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.221 13:16:24 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.221 13:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:28.221 { 00:17:28.221 "auth": { 00:17:28.221 "dhgroup": "null", 00:17:28.221 "digest": "sha384", 00:17:28.221 "state": "completed" 00:17:28.221 }, 00:17:28.221 "cntlid": 51, 00:17:28.221 "listen_address": { 00:17:28.221 "adrfam": "IPv4", 00:17:28.221 "traddr": "10.0.0.2", 00:17:28.221 "trsvcid": "4420", 00:17:28.221 "trtype": "TCP" 00:17:28.221 }, 00:17:28.221 "peer_address": { 00:17:28.221 "adrfam": "IPv4", 00:17:28.221 "traddr": "10.0.0.1", 00:17:28.221 "trsvcid": "51324", 00:17:28.221 "trtype": "TCP" 00:17:28.221 }, 00:17:28.221 "qid": 0, 00:17:28.221 "state": "enabled" 00:17:28.221 } 00:17:28.221 ]' 00:17:28.221 13:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:28.221 13:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:28.221 13:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:28.221 13:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:28.221 13:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:28.221 13:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.221 13:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.221 13:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.478 13:16:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-secret DHHC-1:01:YTYyNDU1MmNlNDc1MThjMjYyMGZlNzhhNzdjYjY5NmGb4/uq: --dhchap-ctrl-secret DHHC-1:02:ZjUwNWMyYjMyMGIwNDI3MmRkNDM2YzIwNzU4ZGMzYWY5ZmUxZmY2NzdiZjM0YjA5rEeJRw==: 00:17:29.042 13:16:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.299 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.299 13:16:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:17:29.299 13:16:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.299 13:16:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.299 13:16:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.299 13:16:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:29.299 13:16:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:29.299 13:16:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:29.556 13:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:17:29.556 13:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:29.556 13:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 
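For reference, one pass of the digest/dhgroup/key loop traced here boils down to three RPCs. A minimal sketch, assuming the target from this run is still listening on 10.0.0.2:4420 and that the DH-HMAC-CHAP keys were already registered earlier in the run under the names key2/ckey2 (their creation is not shown in this part of the trace):

# host side (-s /var/tmp/host.sock): restrict the initiator to one digest/dhgroup pair
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha384 --dhchap-dhgroups null

# target side (default RPC socket): allow the host NQN on the subsystem with bidirectional keys
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host \
    nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2

# host side: attach a controller; DH-HMAC-CHAP authentication runs during the fabric connect
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
    -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
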
00:17:29.556 13:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:29.556 13:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:29.556 13:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:29.556 13:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:29.556 13:16:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.556 13:16:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.556 13:16:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.556 13:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:29.556 13:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:29.814 00:17:29.814 13:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:29.814 13:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:29.814 13:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:30.071 13:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.071 13:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:30.071 13:16:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.071 13:16:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.071 13:16:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.071 13:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:30.071 { 00:17:30.071 "auth": { 00:17:30.071 "dhgroup": "null", 00:17:30.071 "digest": "sha384", 00:17:30.071 "state": "completed" 00:17:30.071 }, 00:17:30.071 "cntlid": 53, 00:17:30.071 "listen_address": { 00:17:30.071 "adrfam": "IPv4", 00:17:30.071 "traddr": "10.0.0.2", 00:17:30.071 "trsvcid": "4420", 00:17:30.071 "trtype": "TCP" 00:17:30.071 }, 00:17:30.071 "peer_address": { 00:17:30.071 "adrfam": "IPv4", 00:17:30.071 "traddr": "10.0.0.1", 00:17:30.071 "trsvcid": "51366", 00:17:30.071 "trtype": "TCP" 00:17:30.071 }, 00:17:30.071 "qid": 0, 00:17:30.071 "state": "enabled" 00:17:30.071 } 00:17:30.071 ]' 00:17:30.071 13:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:30.329 13:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:30.329 13:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:30.329 13:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:30.329 13:16:26 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:30.329 13:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.329 13:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.329 13:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.586 13:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-secret DHHC-1:02:OTg3OGFlYzU1NGIxYjI2OTRlODk0M2YxZjFmZGY1ZmIxMmJkMDdiM2UxOWZjNDRmzm1SbQ==: --dhchap-ctrl-secret DHHC-1:01:ZTYxNjU2NjQzYTE1ZmE5NTQ0MWYxZmVjNjU4NjA4OGIaIdv0: 00:17:31.519 13:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:31.519 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:31.519 13:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:17:31.519 13:16:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.519 13:16:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.519 13:16:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.519 13:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:31.519 13:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:31.519 13:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:31.519 13:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:17:31.519 13:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:31.519 13:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:31.519 13:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:31.519 13:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:31.519 13:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.519 13:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-key key3 00:17:31.519 13:16:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.519 13:16:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.778 13:16:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.778 13:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:31.778 13:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:32.036 00:17:32.036 13:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:32.036 13:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.036 13:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:32.294 13:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.294 13:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:32.294 13:16:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.294 13:16:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.294 13:16:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.294 13:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:32.294 { 00:17:32.294 "auth": { 00:17:32.294 "dhgroup": "null", 00:17:32.294 "digest": "sha384", 00:17:32.294 "state": "completed" 00:17:32.294 }, 00:17:32.294 "cntlid": 55, 00:17:32.294 "listen_address": { 00:17:32.294 "adrfam": "IPv4", 00:17:32.294 "traddr": "10.0.0.2", 00:17:32.294 "trsvcid": "4420", 00:17:32.294 "trtype": "TCP" 00:17:32.294 }, 00:17:32.294 "peer_address": { 00:17:32.294 "adrfam": "IPv4", 00:17:32.294 "traddr": "10.0.0.1", 00:17:32.294 "trsvcid": "46692", 00:17:32.294 "trtype": "TCP" 00:17:32.294 }, 00:17:32.294 "qid": 0, 00:17:32.294 "state": "enabled" 00:17:32.294 } 00:17:32.294 ]' 00:17:32.294 13:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:32.294 13:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:32.294 13:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:32.294 13:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:32.294 13:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:32.552 13:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:32.552 13:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.552 13:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.813 13:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-secret DHHC-1:03:ZWFkNmJmOGMyMjA4ZDI5NGJlZDFhYjg2ZjE4MmYyMGU1NmM3YzQ5NTkxMTJhOTk0MzVkZDU1N2Y5NGJiNjRmNd9JGNU=: 00:17:33.385 13:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.385 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.385 13:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:17:33.385 13:16:30 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.385 13:16:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.385 13:16:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.385 13:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:33.385 13:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:33.385 13:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:33.385 13:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:33.642 13:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:17:33.642 13:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:33.642 13:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:33.642 13:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:33.642 13:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:33.642 13:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:33.642 13:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:33.642 13:16:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.642 13:16:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.642 13:16:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.642 13:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:33.642 13:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:34.208 00:17:34.208 13:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:34.208 13:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.208 13:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:34.467 13:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.467 13:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:34.467 13:16:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.467 13:16:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.467 13:16:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.467 13:16:31 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:34.467 { 00:17:34.467 "auth": { 00:17:34.467 "dhgroup": "ffdhe2048", 00:17:34.467 "digest": "sha384", 00:17:34.467 "state": "completed" 00:17:34.467 }, 00:17:34.467 "cntlid": 57, 00:17:34.467 "listen_address": { 00:17:34.467 "adrfam": "IPv4", 00:17:34.467 "traddr": "10.0.0.2", 00:17:34.467 "trsvcid": "4420", 00:17:34.467 "trtype": "TCP" 00:17:34.467 }, 00:17:34.467 "peer_address": { 00:17:34.467 "adrfam": "IPv4", 00:17:34.467 "traddr": "10.0.0.1", 00:17:34.467 "trsvcid": "46714", 00:17:34.467 "trtype": "TCP" 00:17:34.467 }, 00:17:34.467 "qid": 0, 00:17:34.467 "state": "enabled" 00:17:34.467 } 00:17:34.467 ]' 00:17:34.467 13:16:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:34.467 13:16:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:34.467 13:16:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:34.467 13:16:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:34.467 13:16:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:34.467 13:16:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:34.467 13:16:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.467 13:16:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:34.726 13:16:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-secret DHHC-1:00:N2Y1YzYyNDk4OThiNWI3YTJlYTRlNzNkZTJjNjFmNmY0MjE5OTU4NmYzMDBhZmUyIkAOyQ==: --dhchap-ctrl-secret DHHC-1:03:NWNiY2YyMzRkNDNhMTc4YTM5ZmMxZDQ0YTg3ODY3NDRjYTdhMmQ0MWU5NDAyZjcyN2E2N2JjN2VlYzJiMTUwNLNbInk=: 00:17:35.292 13:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.550 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.550 13:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:17:35.550 13:16:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.550 13:16:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.550 13:16:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.550 13:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:35.550 13:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:35.550 13:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:35.809 13:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:17:35.809 13:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:35.809 13:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha384 00:17:35.809 13:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:35.809 13:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:35.809 13:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:35.809 13:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:35.809 13:16:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.809 13:16:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.809 13:16:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.809 13:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:35.809 13:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:36.067 00:17:36.067 13:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:36.067 13:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:36.067 13:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.325 13:16:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.325 13:16:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:36.325 13:16:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.325 13:16:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.325 13:16:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.325 13:16:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:36.325 { 00:17:36.325 "auth": { 00:17:36.325 "dhgroup": "ffdhe2048", 00:17:36.325 "digest": "sha384", 00:17:36.325 "state": "completed" 00:17:36.325 }, 00:17:36.325 "cntlid": 59, 00:17:36.325 "listen_address": { 00:17:36.325 "adrfam": "IPv4", 00:17:36.325 "traddr": "10.0.0.2", 00:17:36.325 "trsvcid": "4420", 00:17:36.325 "trtype": "TCP" 00:17:36.325 }, 00:17:36.325 "peer_address": { 00:17:36.325 "adrfam": "IPv4", 00:17:36.325 "traddr": "10.0.0.1", 00:17:36.325 "trsvcid": "46746", 00:17:36.325 "trtype": "TCP" 00:17:36.325 }, 00:17:36.325 "qid": 0, 00:17:36.325 "state": "enabled" 00:17:36.325 } 00:17:36.325 ]' 00:17:36.325 13:16:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:36.582 13:16:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:36.582 13:16:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:36.582 13:16:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:36.582 13:16:33 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:36.582 13:16:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:36.582 13:16:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.582 13:16:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.839 13:16:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-secret DHHC-1:01:YTYyNDU1MmNlNDc1MThjMjYyMGZlNzhhNzdjYjY5NmGb4/uq: --dhchap-ctrl-secret DHHC-1:02:ZjUwNWMyYjMyMGIwNDI3MmRkNDM2YzIwNzU4ZGMzYWY5ZmUxZmY2NzdiZjM0YjA5rEeJRw==: 00:17:37.770 13:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:37.770 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:37.770 13:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:17:37.770 13:16:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.770 13:16:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.770 13:16:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.770 13:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:37.770 13:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:37.770 13:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:38.027 13:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:17:38.027 13:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:38.027 13:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:38.027 13:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:38.027 13:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:38.027 13:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:38.027 13:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:38.027 13:16:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.027 13:16:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.027 13:16:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.027 13:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:38.027 13:16:34 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:38.289 00:17:38.289 13:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:38.289 13:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:38.289 13:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.547 13:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.547 13:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:38.547 13:16:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.547 13:16:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.547 13:16:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.547 13:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:38.547 { 00:17:38.547 "auth": { 00:17:38.547 "dhgroup": "ffdhe2048", 00:17:38.547 "digest": "sha384", 00:17:38.547 "state": "completed" 00:17:38.547 }, 00:17:38.547 "cntlid": 61, 00:17:38.547 "listen_address": { 00:17:38.547 "adrfam": "IPv4", 00:17:38.547 "traddr": "10.0.0.2", 00:17:38.547 "trsvcid": "4420", 00:17:38.547 "trtype": "TCP" 00:17:38.547 }, 00:17:38.547 "peer_address": { 00:17:38.547 "adrfam": "IPv4", 00:17:38.547 "traddr": "10.0.0.1", 00:17:38.547 "trsvcid": "46780", 00:17:38.547 "trtype": "TCP" 00:17:38.547 }, 00:17:38.547 "qid": 0, 00:17:38.547 "state": "enabled" 00:17:38.547 } 00:17:38.547 ]' 00:17:38.547 13:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:38.547 13:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:38.547 13:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:38.805 13:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:38.805 13:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:38.805 13:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:38.805 13:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.805 13:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:39.063 13:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-secret DHHC-1:02:OTg3OGFlYzU1NGIxYjI2OTRlODk0M2YxZjFmZGY1ZmIxMmJkMDdiM2UxOWZjNDRmzm1SbQ==: --dhchap-ctrl-secret DHHC-1:01:ZTYxNjU2NjQzYTE1ZmE5NTQ0MWYxZmVjNjU4NjA4OGIaIdv0: 00:17:39.629 13:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:39.629 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:39.629 13:16:36 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:17:39.629 13:16:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.629 13:16:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.629 13:16:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.629 13:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:39.629 13:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:39.629 13:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:39.887 13:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:17:39.887 13:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:39.887 13:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:39.887 13:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:39.887 13:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:39.887 13:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:39.887 13:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-key key3 00:17:39.887 13:16:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.887 13:16:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.887 13:16:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.887 13:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:39.887 13:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:40.452 00:17:40.452 13:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:40.452 13:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:40.452 13:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:40.710 13:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.710 13:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:40.710 13:16:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.710 13:16:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.710 13:16:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
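The verification that each round then performs on the target can be read as follows; a minimal sketch using the same RPC and jq filters as the trace, with the values expected for this sha384/ffdhe2048 round filled in as an example:

# target side: dump the subsystem's queue pairs and check the negotiated auth parameters
qpairs=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]   # digest actually negotiated
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]   # DH group actually negotiated
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]   # authentication finished successfully
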
00:17:40.710 13:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:40.710 { 00:17:40.710 "auth": { 00:17:40.710 "dhgroup": "ffdhe2048", 00:17:40.710 "digest": "sha384", 00:17:40.710 "state": "completed" 00:17:40.710 }, 00:17:40.710 "cntlid": 63, 00:17:40.710 "listen_address": { 00:17:40.710 "adrfam": "IPv4", 00:17:40.710 "traddr": "10.0.0.2", 00:17:40.710 "trsvcid": "4420", 00:17:40.710 "trtype": "TCP" 00:17:40.710 }, 00:17:40.710 "peer_address": { 00:17:40.710 "adrfam": "IPv4", 00:17:40.710 "traddr": "10.0.0.1", 00:17:40.710 "trsvcid": "46822", 00:17:40.710 "trtype": "TCP" 00:17:40.710 }, 00:17:40.710 "qid": 0, 00:17:40.710 "state": "enabled" 00:17:40.710 } 00:17:40.710 ]' 00:17:40.710 13:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:40.710 13:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:40.710 13:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:40.710 13:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:40.710 13:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:40.710 13:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:40.710 13:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:40.710 13:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.275 13:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-secret DHHC-1:03:ZWFkNmJmOGMyMjA4ZDI5NGJlZDFhYjg2ZjE4MmYyMGU1NmM3YzQ5NTkxMTJhOTk0MzVkZDU1N2Y5NGJiNjRmNd9JGNU=: 00:17:41.841 13:16:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:41.841 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:41.841 13:16:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:17:41.841 13:16:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.841 13:16:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.841 13:16:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.841 13:16:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:41.841 13:16:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:41.841 13:16:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:41.841 13:16:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:42.099 13:16:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:17:42.099 13:16:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:42.099 13:16:38 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # digest=sha384 00:17:42.099 13:16:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:42.099 13:16:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:42.099 13:16:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:42.099 13:16:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:42.099 13:16:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.099 13:16:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.099 13:16:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.099 13:16:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:42.099 13:16:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:42.356 00:17:42.614 13:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:42.614 13:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:42.614 13:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:42.872 13:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.872 13:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:42.872 13:16:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.872 13:16:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.872 13:16:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.872 13:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:42.872 { 00:17:42.872 "auth": { 00:17:42.872 "dhgroup": "ffdhe3072", 00:17:42.872 "digest": "sha384", 00:17:42.872 "state": "completed" 00:17:42.872 }, 00:17:42.872 "cntlid": 65, 00:17:42.872 "listen_address": { 00:17:42.872 "adrfam": "IPv4", 00:17:42.872 "traddr": "10.0.0.2", 00:17:42.872 "trsvcid": "4420", 00:17:42.872 "trtype": "TCP" 00:17:42.872 }, 00:17:42.872 "peer_address": { 00:17:42.872 "adrfam": "IPv4", 00:17:42.872 "traddr": "10.0.0.1", 00:17:42.872 "trsvcid": "36460", 00:17:42.872 "trtype": "TCP" 00:17:42.872 }, 00:17:42.872 "qid": 0, 00:17:42.872 "state": "enabled" 00:17:42.872 } 00:17:42.872 ]' 00:17:42.872 13:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:42.872 13:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:42.872 13:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:42.872 13:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 
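The ffdhe3072/key0 attach just above adds --dhchap-ctrlr-key ckey0 alongside --dhchap-key key0, i.e. bidirectional DH-HMAC-CHAP: the target verifies the host's key and the host verifies the controller's key. The kernel-initiator leg of the same iteration (the nvme connect that follows in the trace) spells that pairing out as two literal secrets; roughly, with the secrets elided and $hostnqn/$hostid standing in for the UUID-based host NQN and host ID used throughout:

  # authenticate both directions through the kernel initiator
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$hostnqn" --hostid "$hostid" \
      --dhchap-secret 'DHHC-1:00:<host key0 secret>' \
      --dhchap-ctrl-secret 'DHHC-1:03:<controller ckey0 secret>'

  # drop the session before the next digest/dhgroup combination is configured
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0

Iterations that pass only --dhchap-key (key3 in this trace) correspondingly omit --dhchap-ctrl-secret on the nvme connect side.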
00:17:42.872 13:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:42.872 13:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:42.872 13:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:42.872 13:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:43.133 13:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-secret DHHC-1:00:N2Y1YzYyNDk4OThiNWI3YTJlYTRlNzNkZTJjNjFmNmY0MjE5OTU4NmYzMDBhZmUyIkAOyQ==: --dhchap-ctrl-secret DHHC-1:03:NWNiY2YyMzRkNDNhMTc4YTM5ZmMxZDQ0YTg3ODY3NDRjYTdhMmQ0MWU5NDAyZjcyN2E2N2JjN2VlYzJiMTUwNLNbInk=: 00:17:43.782 13:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:43.782 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:44.040 13:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:17:44.040 13:16:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.040 13:16:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.040 13:16:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.040 13:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:44.040 13:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:44.040 13:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:44.298 13:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:17:44.298 13:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:44.298 13:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:44.298 13:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:44.298 13:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:44.298 13:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:44.298 13:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:44.298 13:16:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.298 13:16:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.298 13:16:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.298 13:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:44.298 13:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:44.556 00:17:44.556 13:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:44.556 13:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.556 13:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:44.816 13:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.816 13:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.816 13:16:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.816 13:16:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.816 13:16:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.816 13:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:44.816 { 00:17:44.816 "auth": { 00:17:44.816 "dhgroup": "ffdhe3072", 00:17:44.816 "digest": "sha384", 00:17:44.816 "state": "completed" 00:17:44.816 }, 00:17:44.816 "cntlid": 67, 00:17:44.816 "listen_address": { 00:17:44.816 "adrfam": "IPv4", 00:17:44.816 "traddr": "10.0.0.2", 00:17:44.816 "trsvcid": "4420", 00:17:44.816 "trtype": "TCP" 00:17:44.816 }, 00:17:44.816 "peer_address": { 00:17:44.816 "adrfam": "IPv4", 00:17:44.816 "traddr": "10.0.0.1", 00:17:44.816 "trsvcid": "36484", 00:17:44.816 "trtype": "TCP" 00:17:44.816 }, 00:17:44.816 "qid": 0, 00:17:44.816 "state": "enabled" 00:17:44.816 } 00:17:44.816 ]' 00:17:44.816 13:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:44.816 13:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:44.816 13:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:44.816 13:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:44.816 13:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:45.075 13:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:45.075 13:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:45.075 13:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:45.333 13:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-secret DHHC-1:01:YTYyNDU1MmNlNDc1MThjMjYyMGZlNzhhNzdjYjY5NmGb4/uq: --dhchap-ctrl-secret DHHC-1:02:ZjUwNWMyYjMyMGIwNDI3MmRkNDM2YzIwNzU4ZGMzYWY5ZmUxZmY2NzdiZjM0YjA5rEeJRw==: 00:17:46.269 13:16:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:46.269 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:17:46.269 13:16:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:17:46.269 13:16:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.269 13:16:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.269 13:16:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.269 13:16:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:46.269 13:16:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:46.269 13:16:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:46.528 13:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:17:46.528 13:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:46.528 13:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:46.528 13:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:46.528 13:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:46.528 13:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:46.528 13:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:46.528 13:16:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.528 13:16:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.528 13:16:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.528 13:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:46.528 13:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:46.785 00:17:46.785 13:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:46.785 13:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:46.785 13:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:47.043 13:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.043 13:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:47.043 13:16:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.043 13:16:43 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:47.043 13:16:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.043 13:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:47.043 { 00:17:47.043 "auth": { 00:17:47.043 "dhgroup": "ffdhe3072", 00:17:47.043 "digest": "sha384", 00:17:47.043 "state": "completed" 00:17:47.043 }, 00:17:47.043 "cntlid": 69, 00:17:47.043 "listen_address": { 00:17:47.043 "adrfam": "IPv4", 00:17:47.043 "traddr": "10.0.0.2", 00:17:47.043 "trsvcid": "4420", 00:17:47.043 "trtype": "TCP" 00:17:47.043 }, 00:17:47.043 "peer_address": { 00:17:47.043 "adrfam": "IPv4", 00:17:47.043 "traddr": "10.0.0.1", 00:17:47.043 "trsvcid": "36518", 00:17:47.043 "trtype": "TCP" 00:17:47.043 }, 00:17:47.043 "qid": 0, 00:17:47.043 "state": "enabled" 00:17:47.043 } 00:17:47.043 ]' 00:17:47.043 13:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:47.301 13:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:47.301 13:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:47.301 13:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:47.301 13:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:47.301 13:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:47.301 13:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:47.301 13:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:47.559 13:16:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-secret DHHC-1:02:OTg3OGFlYzU1NGIxYjI2OTRlODk0M2YxZjFmZGY1ZmIxMmJkMDdiM2UxOWZjNDRmzm1SbQ==: --dhchap-ctrl-secret DHHC-1:01:ZTYxNjU2NjQzYTE1ZmE5NTQ0MWYxZmVjNjU4NjA4OGIaIdv0: 00:17:48.126 13:16:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:48.402 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:48.402 13:16:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:17:48.402 13:16:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.402 13:16:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.402 13:16:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.402 13:16:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:48.402 13:16:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:48.402 13:16:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:48.674 13:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:17:48.674 13:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local 
digest dhgroup key ckey qpairs 00:17:48.674 13:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:48.674 13:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:48.674 13:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:48.674 13:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:48.674 13:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-key key3 00:17:48.674 13:16:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.674 13:16:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.674 13:16:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.674 13:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:48.674 13:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:48.955 00:17:48.955 13:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:48.955 13:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:48.955 13:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:49.213 13:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.213 13:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:49.213 13:16:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.213 13:16:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.213 13:16:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.213 13:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:49.213 { 00:17:49.213 "auth": { 00:17:49.213 "dhgroup": "ffdhe3072", 00:17:49.213 "digest": "sha384", 00:17:49.213 "state": "completed" 00:17:49.213 }, 00:17:49.213 "cntlid": 71, 00:17:49.213 "listen_address": { 00:17:49.213 "adrfam": "IPv4", 00:17:49.213 "traddr": "10.0.0.2", 00:17:49.213 "trsvcid": "4420", 00:17:49.213 "trtype": "TCP" 00:17:49.213 }, 00:17:49.213 "peer_address": { 00:17:49.213 "adrfam": "IPv4", 00:17:49.213 "traddr": "10.0.0.1", 00:17:49.213 "trsvcid": "36542", 00:17:49.213 "trtype": "TCP" 00:17:49.213 }, 00:17:49.213 "qid": 0, 00:17:49.213 "state": "enabled" 00:17:49.213 } 00:17:49.213 ]' 00:17:49.213 13:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:49.213 13:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:49.213 13:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:49.213 13:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == 
\f\f\d\h\e\3\0\7\2 ]] 00:17:49.213 13:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:49.213 13:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:49.213 13:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:49.213 13:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:49.778 13:16:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-secret DHHC-1:03:ZWFkNmJmOGMyMjA4ZDI5NGJlZDFhYjg2ZjE4MmYyMGU1NmM3YzQ5NTkxMTJhOTk0MzVkZDU1N2Y5NGJiNjRmNd9JGNU=: 00:17:50.343 13:16:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:50.343 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:50.343 13:16:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:17:50.343 13:16:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.343 13:16:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.343 13:16:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.343 13:16:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:50.343 13:16:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:50.343 13:16:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:50.343 13:16:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:50.601 13:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:17:50.601 13:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:50.601 13:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:50.601 13:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:50.601 13:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:50.601 13:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:50.601 13:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.601 13:16:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.601 13:16:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.601 13:16:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.601 13:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.601 13:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.858 00:17:50.858 13:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:50.858 13:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.858 13:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:51.424 13:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.424 13:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:51.424 13:16:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.424 13:16:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.424 13:16:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.424 13:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:51.424 { 00:17:51.424 "auth": { 00:17:51.424 "dhgroup": "ffdhe4096", 00:17:51.424 "digest": "sha384", 00:17:51.424 "state": "completed" 00:17:51.424 }, 00:17:51.424 "cntlid": 73, 00:17:51.424 "listen_address": { 00:17:51.424 "adrfam": "IPv4", 00:17:51.424 "traddr": "10.0.0.2", 00:17:51.424 "trsvcid": "4420", 00:17:51.424 "trtype": "TCP" 00:17:51.424 }, 00:17:51.424 "peer_address": { 00:17:51.424 "adrfam": "IPv4", 00:17:51.424 "traddr": "10.0.0.1", 00:17:51.424 "trsvcid": "36918", 00:17:51.424 "trtype": "TCP" 00:17:51.424 }, 00:17:51.424 "qid": 0, 00:17:51.424 "state": "enabled" 00:17:51.424 } 00:17:51.424 ]' 00:17:51.424 13:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:51.424 13:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:51.424 13:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:51.424 13:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:51.424 13:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:51.424 13:16:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:51.424 13:16:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:51.424 13:16:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.682 13:16:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-secret DHHC-1:00:N2Y1YzYyNDk4OThiNWI3YTJlYTRlNzNkZTJjNjFmNmY0MjE5OTU4NmYzMDBhZmUyIkAOyQ==: --dhchap-ctrl-secret DHHC-1:03:NWNiY2YyMzRkNDNhMTc4YTM5ZmMxZDQ0YTg3ODY3NDRjYTdhMmQ0MWU5NDAyZjcyN2E2N2JjN2VlYzJiMTUwNLNbInk=: 00:17:52.615 13:16:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:17:52.615 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:52.615 13:16:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:17:52.615 13:16:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.615 13:16:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.615 13:16:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.615 13:16:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:52.615 13:16:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:52.615 13:16:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:52.615 13:16:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:17:52.615 13:16:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:52.615 13:16:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:52.615 13:16:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:52.615 13:16:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:52.615 13:16:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:52.615 13:16:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.615 13:16:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.615 13:16:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.873 13:16:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.873 13:16:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.873 13:16:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:53.132 00:17:53.132 13:16:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:53.132 13:16:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:53.132 13:16:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.391 13:16:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.391 13:16:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:53.391 13:16:50 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.391 13:16:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.391 13:16:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.391 13:16:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:53.391 { 00:17:53.391 "auth": { 00:17:53.391 "dhgroup": "ffdhe4096", 00:17:53.391 "digest": "sha384", 00:17:53.391 "state": "completed" 00:17:53.391 }, 00:17:53.391 "cntlid": 75, 00:17:53.391 "listen_address": { 00:17:53.391 "adrfam": "IPv4", 00:17:53.391 "traddr": "10.0.0.2", 00:17:53.391 "trsvcid": "4420", 00:17:53.391 "trtype": "TCP" 00:17:53.391 }, 00:17:53.391 "peer_address": { 00:17:53.391 "adrfam": "IPv4", 00:17:53.391 "traddr": "10.0.0.1", 00:17:53.391 "trsvcid": "36938", 00:17:53.391 "trtype": "TCP" 00:17:53.391 }, 00:17:53.391 "qid": 0, 00:17:53.391 "state": "enabled" 00:17:53.391 } 00:17:53.391 ]' 00:17:53.391 13:16:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:53.391 13:16:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:53.392 13:16:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:53.649 13:16:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:53.649 13:16:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:53.649 13:16:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:53.649 13:16:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:53.649 13:16:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.907 13:16:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-secret DHHC-1:01:YTYyNDU1MmNlNDc1MThjMjYyMGZlNzhhNzdjYjY5NmGb4/uq: --dhchap-ctrl-secret DHHC-1:02:ZjUwNWMyYjMyMGIwNDI3MmRkNDM2YzIwNzU4ZGMzYWY5ZmUxZmY2NzdiZjM0YjA5rEeJRw==: 00:17:54.473 13:16:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:54.473 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:54.473 13:16:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:17:54.473 13:16:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.473 13:16:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.473 13:16:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.473 13:16:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:54.473 13:16:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:54.473 13:16:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:54.731 13:16:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha384 ffdhe4096 2 00:17:54.731 13:16:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:54.731 13:16:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:54.731 13:16:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:54.731 13:16:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:54.731 13:16:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:54.731 13:16:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.731 13:16:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.731 13:16:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.731 13:16:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.731 13:16:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.731 13:16:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:55.296 00:17:55.296 13:16:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:55.296 13:16:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.296 13:16:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:55.554 13:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.554 13:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:55.554 13:16:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.554 13:16:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.554 13:16:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.554 13:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:55.554 { 00:17:55.554 "auth": { 00:17:55.554 "dhgroup": "ffdhe4096", 00:17:55.554 "digest": "sha384", 00:17:55.554 "state": "completed" 00:17:55.554 }, 00:17:55.554 "cntlid": 77, 00:17:55.554 "listen_address": { 00:17:55.554 "adrfam": "IPv4", 00:17:55.554 "traddr": "10.0.0.2", 00:17:55.554 "trsvcid": "4420", 00:17:55.554 "trtype": "TCP" 00:17:55.554 }, 00:17:55.554 "peer_address": { 00:17:55.554 "adrfam": "IPv4", 00:17:55.554 "traddr": "10.0.0.1", 00:17:55.554 "trsvcid": "36958", 00:17:55.554 "trtype": "TCP" 00:17:55.554 }, 00:17:55.554 "qid": 0, 00:17:55.554 "state": "enabled" 00:17:55.554 } 00:17:55.554 ]' 00:17:55.554 13:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:55.554 13:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 
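Every iteration ends with the same three probes against nvmf_subsystem_get_qpairs, as seen above: the negotiated digest, the negotiated dhgroup, and an auth state of "completed". A compact form of that check, assuming the target answers on its default RPC socket; the helper name check_auth is purely illustrative and not part of auth.sh:

  # confirm qpair 0 finished DH-HMAC-CHAP with the expected parameters
  check_auth() {
      local digest=$1 dhgroup=$2 qpairs
      qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
      [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest"  ]] &&
      [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]] &&
      [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed  ]]
  }

  check_auth sha384 ffdhe4096   # the combination exercised in the iteration above

The state probe is what distinguishes a qpair that actually completed DH-HMAC-CHAP from one that merely connected.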
00:17:55.554 13:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:55.554 13:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:55.554 13:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:55.554 13:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:55.554 13:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:55.555 13:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.119 13:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-secret DHHC-1:02:OTg3OGFlYzU1NGIxYjI2OTRlODk0M2YxZjFmZGY1ZmIxMmJkMDdiM2UxOWZjNDRmzm1SbQ==: --dhchap-ctrl-secret DHHC-1:01:ZTYxNjU2NjQzYTE1ZmE5NTQ0MWYxZmVjNjU4NjA4OGIaIdv0: 00:17:56.685 13:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:56.685 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:56.685 13:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:17:56.685 13:16:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.685 13:16:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.685 13:16:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.685 13:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:56.685 13:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:56.685 13:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:56.942 13:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:17:56.942 13:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:56.942 13:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:56.942 13:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:56.942 13:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:56.942 13:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:56.943 13:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-key key3 00:17:56.943 13:16:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.943 13:16:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.943 13:16:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.943 13:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:56.943 13:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:57.525 00:17:57.525 13:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:57.525 13:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:57.525 13:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.783 13:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.783 13:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:57.783 13:16:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.783 13:16:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.783 13:16:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.783 13:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:57.783 { 00:17:57.783 "auth": { 00:17:57.783 "dhgroup": "ffdhe4096", 00:17:57.783 "digest": "sha384", 00:17:57.783 "state": "completed" 00:17:57.783 }, 00:17:57.783 "cntlid": 79, 00:17:57.783 "listen_address": { 00:17:57.783 "adrfam": "IPv4", 00:17:57.783 "traddr": "10.0.0.2", 00:17:57.783 "trsvcid": "4420", 00:17:57.783 "trtype": "TCP" 00:17:57.783 }, 00:17:57.783 "peer_address": { 00:17:57.783 "adrfam": "IPv4", 00:17:57.783 "traddr": "10.0.0.1", 00:17:57.783 "trsvcid": "36996", 00:17:57.783 "trtype": "TCP" 00:17:57.783 }, 00:17:57.783 "qid": 0, 00:17:57.783 "state": "enabled" 00:17:57.783 } 00:17:57.783 ]' 00:17:57.783 13:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:57.783 13:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:57.783 13:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:57.783 13:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:57.783 13:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:57.783 13:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:57.783 13:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.783 13:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:58.041 13:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-secret DHHC-1:03:ZWFkNmJmOGMyMjA4ZDI5NGJlZDFhYjg2ZjE4MmYyMGU1NmM3YzQ5NTkxMTJhOTk0MzVkZDU1N2Y5NGJiNjRmNd9JGNU=: 00:17:58.606 13:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.606 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:17:58.606 13:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:17:58.606 13:16:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.606 13:16:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.606 13:16:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.606 13:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:58.606 13:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:58.606 13:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:58.606 13:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:59.170 13:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:17:59.170 13:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:59.170 13:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:59.170 13:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:59.170 13:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:59.170 13:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:59.170 13:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:59.170 13:16:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.170 13:16:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.170 13:16:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.171 13:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:59.171 13:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:59.428 00:17:59.428 13:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:59.428 13:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:59.685 13:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.685 13:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.685 13:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.685 13:16:56 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.685 13:16:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.685 13:16:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.942 13:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:59.942 { 00:17:59.942 "auth": { 00:17:59.942 "dhgroup": "ffdhe6144", 00:17:59.942 "digest": "sha384", 00:17:59.942 "state": "completed" 00:17:59.942 }, 00:17:59.942 "cntlid": 81, 00:17:59.942 "listen_address": { 00:17:59.942 "adrfam": "IPv4", 00:17:59.942 "traddr": "10.0.0.2", 00:17:59.942 "trsvcid": "4420", 00:17:59.942 "trtype": "TCP" 00:17:59.942 }, 00:17:59.942 "peer_address": { 00:17:59.942 "adrfam": "IPv4", 00:17:59.942 "traddr": "10.0.0.1", 00:17:59.942 "trsvcid": "37014", 00:17:59.942 "trtype": "TCP" 00:17:59.942 }, 00:17:59.942 "qid": 0, 00:17:59.942 "state": "enabled" 00:17:59.942 } 00:17:59.942 ]' 00:17:59.942 13:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:59.942 13:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:59.942 13:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:59.942 13:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:59.942 13:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:59.942 13:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.942 13:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.942 13:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.199 13:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-secret DHHC-1:00:N2Y1YzYyNDk4OThiNWI3YTJlYTRlNzNkZTJjNjFmNmY0MjE5OTU4NmYzMDBhZmUyIkAOyQ==: --dhchap-ctrl-secret DHHC-1:03:NWNiY2YyMzRkNDNhMTc4YTM5ZmMxZDQ0YTg3ODY3NDRjYTdhMmQ0MWU5NDAyZjcyN2E2N2JjN2VlYzJiMTUwNLNbInk=: 00:18:01.132 13:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:01.132 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:01.132 13:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:18:01.132 13:16:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.132 13:16:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.132 13:16:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.132 13:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:01.132 13:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:01.132 13:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:01.389 
13:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:18:01.389 13:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:01.389 13:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:01.389 13:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:01.389 13:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:01.389 13:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:01.389 13:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.389 13:16:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.389 13:16:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.389 13:16:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.389 13:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.390 13:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.647 00:18:01.647 13:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:01.647 13:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:01.647 13:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.213 13:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.213 13:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:02.213 13:16:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.213 13:16:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.213 13:16:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.213 13:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:02.213 { 00:18:02.213 "auth": { 00:18:02.213 "dhgroup": "ffdhe6144", 00:18:02.213 "digest": "sha384", 00:18:02.213 "state": "completed" 00:18:02.213 }, 00:18:02.213 "cntlid": 83, 00:18:02.213 "listen_address": { 00:18:02.213 "adrfam": "IPv4", 00:18:02.213 "traddr": "10.0.0.2", 00:18:02.213 "trsvcid": "4420", 00:18:02.213 "trtype": "TCP" 00:18:02.213 }, 00:18:02.213 "peer_address": { 00:18:02.213 "adrfam": "IPv4", 00:18:02.213 "traddr": "10.0.0.1", 00:18:02.213 "trsvcid": "47376", 00:18:02.213 "trtype": "TCP" 00:18:02.213 }, 00:18:02.213 "qid": 0, 00:18:02.213 "state": "enabled" 00:18:02.213 } 00:18:02.213 ]' 00:18:02.213 13:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:02.213 13:16:58 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:02.213 13:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:02.213 13:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:02.213 13:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:02.213 13:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.213 13:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.213 13:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:02.471 13:16:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-secret DHHC-1:01:YTYyNDU1MmNlNDc1MThjMjYyMGZlNzhhNzdjYjY5NmGb4/uq: --dhchap-ctrl-secret DHHC-1:02:ZjUwNWMyYjMyMGIwNDI3MmRkNDM2YzIwNzU4ZGMzYWY5ZmUxZmY2NzdiZjM0YjA5rEeJRw==: 00:18:03.405 13:16:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.405 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.405 13:16:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:18:03.405 13:16:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.405 13:16:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.405 13:16:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.405 13:16:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:03.405 13:16:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:03.405 13:16:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:03.662 13:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:18:03.662 13:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:03.662 13:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:03.662 13:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:03.663 13:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:03.663 13:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:03.663 13:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:03.663 13:17:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.663 13:17:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.663 13:17:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.663 13:17:00 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:03.663 13:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.227 00:18:04.227 13:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:04.227 13:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.227 13:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:04.485 13:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.485 13:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.485 13:17:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.485 13:17:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.485 13:17:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.485 13:17:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:04.485 { 00:18:04.485 "auth": { 00:18:04.485 "dhgroup": "ffdhe6144", 00:18:04.485 "digest": "sha384", 00:18:04.485 "state": "completed" 00:18:04.485 }, 00:18:04.485 "cntlid": 85, 00:18:04.485 "listen_address": { 00:18:04.485 "adrfam": "IPv4", 00:18:04.485 "traddr": "10.0.0.2", 00:18:04.485 "trsvcid": "4420", 00:18:04.485 "trtype": "TCP" 00:18:04.485 }, 00:18:04.485 "peer_address": { 00:18:04.485 "adrfam": "IPv4", 00:18:04.485 "traddr": "10.0.0.1", 00:18:04.485 "trsvcid": "47398", 00:18:04.485 "trtype": "TCP" 00:18:04.485 }, 00:18:04.485 "qid": 0, 00:18:04.485 "state": "enabled" 00:18:04.485 } 00:18:04.485 ]' 00:18:04.485 13:17:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:04.485 13:17:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:04.485 13:17:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:04.485 13:17:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:04.485 13:17:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:04.485 13:17:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:04.485 13:17:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:04.485 13:17:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.742 13:17:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-secret DHHC-1:02:OTg3OGFlYzU1NGIxYjI2OTRlODk0M2YxZjFmZGY1ZmIxMmJkMDdiM2UxOWZjNDRmzm1SbQ==: --dhchap-ctrl-secret 
DHHC-1:01:ZTYxNjU2NjQzYTE1ZmE5NTQ0MWYxZmVjNjU4NjA4OGIaIdv0: 00:18:05.674 13:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:05.674 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:05.674 13:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:18:05.674 13:17:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.674 13:17:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.674 13:17:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.674 13:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:05.674 13:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:05.674 13:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:05.932 13:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:18:05.932 13:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:05.932 13:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:05.932 13:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:05.932 13:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:05.932 13:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:05.932 13:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-key key3 00:18:05.932 13:17:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.932 13:17:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.932 13:17:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.932 13:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:05.932 13:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:06.189 00:18:06.189 13:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:06.189 13:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:06.189 13:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.753 13:17:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.753 13:17:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:18:06.754 13:17:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.754 13:17:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.754 13:17:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.754 13:17:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:06.754 { 00:18:06.754 "auth": { 00:18:06.754 "dhgroup": "ffdhe6144", 00:18:06.754 "digest": "sha384", 00:18:06.754 "state": "completed" 00:18:06.754 }, 00:18:06.754 "cntlid": 87, 00:18:06.754 "listen_address": { 00:18:06.754 "adrfam": "IPv4", 00:18:06.754 "traddr": "10.0.0.2", 00:18:06.754 "trsvcid": "4420", 00:18:06.754 "trtype": "TCP" 00:18:06.754 }, 00:18:06.754 "peer_address": { 00:18:06.754 "adrfam": "IPv4", 00:18:06.754 "traddr": "10.0.0.1", 00:18:06.754 "trsvcid": "47430", 00:18:06.754 "trtype": "TCP" 00:18:06.754 }, 00:18:06.754 "qid": 0, 00:18:06.754 "state": "enabled" 00:18:06.754 } 00:18:06.754 ]' 00:18:06.754 13:17:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:06.754 13:17:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:06.754 13:17:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:06.754 13:17:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:06.754 13:17:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:06.754 13:17:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:06.754 13:17:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.754 13:17:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:07.011 13:17:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-secret DHHC-1:03:ZWFkNmJmOGMyMjA4ZDI5NGJlZDFhYjg2ZjE4MmYyMGU1NmM3YzQ5NTkxMTJhOTk0MzVkZDU1N2Y5NGJiNjRmNd9JGNU=: 00:18:07.942 13:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:07.942 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:07.942 13:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:18:07.942 13:17:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.942 13:17:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.942 13:17:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.942 13:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:07.942 13:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:07.942 13:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:07.943 13:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe8192 00:18:08.200 13:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:18:08.200 13:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:08.200 13:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:08.200 13:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:08.200 13:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:08.200 13:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:08.200 13:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:08.200 13:17:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.200 13:17:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.200 13:17:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.200 13:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:08.200 13:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:08.765 00:18:08.765 13:17:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:08.765 13:17:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.765 13:17:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:09.329 13:17:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.329 13:17:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:09.329 13:17:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.329 13:17:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.329 13:17:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.329 13:17:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:09.329 { 00:18:09.329 "auth": { 00:18:09.329 "dhgroup": "ffdhe8192", 00:18:09.329 "digest": "sha384", 00:18:09.329 "state": "completed" 00:18:09.329 }, 00:18:09.329 "cntlid": 89, 00:18:09.329 "listen_address": { 00:18:09.329 "adrfam": "IPv4", 00:18:09.329 "traddr": "10.0.0.2", 00:18:09.329 "trsvcid": "4420", 00:18:09.329 "trtype": "TCP" 00:18:09.329 }, 00:18:09.329 "peer_address": { 00:18:09.329 "adrfam": "IPv4", 00:18:09.329 "traddr": "10.0.0.1", 00:18:09.329 "trsvcid": "47468", 00:18:09.329 "trtype": "TCP" 00:18:09.329 }, 00:18:09.329 "qid": 0, 00:18:09.329 "state": "enabled" 00:18:09.329 } 00:18:09.329 ]' 00:18:09.329 13:17:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 
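
For reference, each connect_authenticate pass traced above reduces to a handful of RPCs plus a teardown. The following is a hand-written sketch, not part of auth.sh: it assumes the target RPC server listens on its default socket, the host-side bdev_nvme instance listens on /var/tmp/host.sock as in this run, and that keys named "key1"/"ckey1" were already set up earlier in the job (those names are taken from the trace, not defined here).

    # Minimal sketch of one connect_authenticate pass (sha384 / ffdhe8192 / key index 1).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    hostrpc() { "$rpc" -s /var/tmp/host.sock "$@"; }

    digest=sha384
    dhgroup=ffdhe8192
    keyid=1
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02

    # Pin the host initiator to a single digest/DH-group combination.
    hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Authorize the host on the subsystem with its DH-CHAP key pair.
    "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

    # Attach a controller from the host side; this performs in-band authentication.
    hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

    # Confirm what the target negotiated for the new queue pair (digest, dhgroup, state).
    "$rpc" nvmf_subsystem_get_qpairs "$subnqn" \
        | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'

    # Tear down before the next digest/dhgroup/key combination.
    hostrpc bdev_nvme_detach_controller nvme0

The trace above then repeats this with the kernel initiator (nvme connect / nvme disconnect) before removing the host and moving to the next key index.
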
00:18:09.329 13:17:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:09.329 13:17:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:09.329 13:17:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:09.329 13:17:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:09.329 13:17:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:09.329 13:17:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:09.329 13:17:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:09.587 13:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-secret DHHC-1:00:N2Y1YzYyNDk4OThiNWI3YTJlYTRlNzNkZTJjNjFmNmY0MjE5OTU4NmYzMDBhZmUyIkAOyQ==: --dhchap-ctrl-secret DHHC-1:03:NWNiY2YyMzRkNDNhMTc4YTM5ZmMxZDQ0YTg3ODY3NDRjYTdhMmQ0MWU5NDAyZjcyN2E2N2JjN2VlYzJiMTUwNLNbInk=: 00:18:10.151 13:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:10.151 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:10.151 13:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:18:10.151 13:17:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.151 13:17:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.151 13:17:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.151 13:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:10.151 13:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:10.151 13:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:10.408 13:17:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:18:10.409 13:17:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:10.409 13:17:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:10.409 13:17:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:10.409 13:17:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:10.409 13:17:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:10.409 13:17:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:10.409 13:17:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.409 13:17:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.409 13:17:07 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.409 13:17:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:10.409 13:17:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:11.372 00:18:11.372 13:17:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:11.372 13:17:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:11.372 13:17:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:11.669 13:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.669 13:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:11.669 13:17:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.669 13:17:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.669 13:17:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.669 13:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:11.669 { 00:18:11.669 "auth": { 00:18:11.669 "dhgroup": "ffdhe8192", 00:18:11.669 "digest": "sha384", 00:18:11.669 "state": "completed" 00:18:11.669 }, 00:18:11.670 "cntlid": 91, 00:18:11.670 "listen_address": { 00:18:11.670 "adrfam": "IPv4", 00:18:11.670 "traddr": "10.0.0.2", 00:18:11.670 "trsvcid": "4420", 00:18:11.670 "trtype": "TCP" 00:18:11.670 }, 00:18:11.670 "peer_address": { 00:18:11.670 "adrfam": "IPv4", 00:18:11.670 "traddr": "10.0.0.1", 00:18:11.670 "trsvcid": "39108", 00:18:11.670 "trtype": "TCP" 00:18:11.670 }, 00:18:11.670 "qid": 0, 00:18:11.670 "state": "enabled" 00:18:11.670 } 00:18:11.670 ]' 00:18:11.670 13:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:11.670 13:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:11.670 13:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:11.670 13:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:11.670 13:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:11.670 13:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.670 13:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.670 13:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:11.927 13:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-secret 
DHHC-1:01:YTYyNDU1MmNlNDc1MThjMjYyMGZlNzhhNzdjYjY5NmGb4/uq: --dhchap-ctrl-secret DHHC-1:02:ZjUwNWMyYjMyMGIwNDI3MmRkNDM2YzIwNzU4ZGMzYWY5ZmUxZmY2NzdiZjM0YjA5rEeJRw==: 00:18:12.861 13:17:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:12.861 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:12.861 13:17:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:18:12.861 13:17:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.861 13:17:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.861 13:17:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.861 13:17:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:12.861 13:17:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:12.861 13:17:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:12.861 13:17:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:18:12.861 13:17:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:12.861 13:17:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:12.861 13:17:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:12.861 13:17:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:12.861 13:17:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:12.861 13:17:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:12.861 13:17:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.861 13:17:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.861 13:17:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.861 13:17:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:12.861 13:17:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:13.793 00:18:13.793 13:17:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:13.793 13:17:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:13.793 13:17:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:13.793 13:17:10 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.793 13:17:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:13.793 13:17:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.793 13:17:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.050 13:17:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.050 13:17:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:14.050 { 00:18:14.050 "auth": { 00:18:14.050 "dhgroup": "ffdhe8192", 00:18:14.050 "digest": "sha384", 00:18:14.050 "state": "completed" 00:18:14.050 }, 00:18:14.050 "cntlid": 93, 00:18:14.050 "listen_address": { 00:18:14.050 "adrfam": "IPv4", 00:18:14.050 "traddr": "10.0.0.2", 00:18:14.050 "trsvcid": "4420", 00:18:14.050 "trtype": "TCP" 00:18:14.050 }, 00:18:14.050 "peer_address": { 00:18:14.050 "adrfam": "IPv4", 00:18:14.050 "traddr": "10.0.0.1", 00:18:14.050 "trsvcid": "39144", 00:18:14.050 "trtype": "TCP" 00:18:14.050 }, 00:18:14.050 "qid": 0, 00:18:14.050 "state": "enabled" 00:18:14.050 } 00:18:14.050 ]' 00:18:14.050 13:17:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:14.050 13:17:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:14.050 13:17:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:14.050 13:17:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:14.050 13:17:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:14.050 13:17:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:14.050 13:17:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:14.050 13:17:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:14.311 13:17:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-secret DHHC-1:02:OTg3OGFlYzU1NGIxYjI2OTRlODk0M2YxZjFmZGY1ZmIxMmJkMDdiM2UxOWZjNDRmzm1SbQ==: --dhchap-ctrl-secret DHHC-1:01:ZTYxNjU2NjQzYTE1ZmE5NTQ0MWYxZmVjNjU4NjA4OGIaIdv0: 00:18:15.243 13:17:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:15.243 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:15.243 13:17:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:18:15.243 13:17:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.243 13:17:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.243 13:17:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.243 13:17:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:15.243 13:17:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:15.243 13:17:11 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:15.502 13:17:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:18:15.502 13:17:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:15.502 13:17:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:15.502 13:17:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:15.502 13:17:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:15.502 13:17:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.502 13:17:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-key key3 00:18:15.502 13:17:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.502 13:17:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.502 13:17:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.502 13:17:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:15.502 13:17:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:16.093 00:18:16.093 13:17:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:16.093 13:17:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:16.093 13:17:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:16.351 13:17:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.351 13:17:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:16.351 13:17:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.351 13:17:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.351 13:17:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.351 13:17:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:16.351 { 00:18:16.351 "auth": { 00:18:16.351 "dhgroup": "ffdhe8192", 00:18:16.351 "digest": "sha384", 00:18:16.351 "state": "completed" 00:18:16.351 }, 00:18:16.351 "cntlid": 95, 00:18:16.351 "listen_address": { 00:18:16.351 "adrfam": "IPv4", 00:18:16.351 "traddr": "10.0.0.2", 00:18:16.351 "trsvcid": "4420", 00:18:16.351 "trtype": "TCP" 00:18:16.351 }, 00:18:16.351 "peer_address": { 00:18:16.351 "adrfam": "IPv4", 00:18:16.351 "traddr": "10.0.0.1", 00:18:16.351 "trsvcid": "39182", 00:18:16.351 "trtype": "TCP" 00:18:16.351 }, 00:18:16.351 "qid": 0, 00:18:16.351 "state": "enabled" 00:18:16.351 } 00:18:16.351 ]' 00:18:16.351 13:17:12 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:16.351 13:17:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:16.351 13:17:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:16.351 13:17:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:16.351 13:17:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:16.609 13:17:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:16.609 13:17:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:16.609 13:17:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.866 13:17:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-secret DHHC-1:03:ZWFkNmJmOGMyMjA4ZDI5NGJlZDFhYjg2ZjE4MmYyMGU1NmM3YzQ5NTkxMTJhOTk0MzVkZDU1N2Y5NGJiNjRmNd9JGNU=: 00:18:17.431 13:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:17.431 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:17.431 13:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:18:17.431 13:17:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.431 13:17:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.431 13:17:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.431 13:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:17.431 13:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:17.431 13:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:17.431 13:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:17.431 13:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:17.688 13:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:18:17.688 13:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:17.688 13:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:17.688 13:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:17.688 13:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:17.688 13:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:17.688 13:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:17.688 13:17:14 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.688 13:17:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.688 13:17:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.688 13:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:17.688 13:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:17.946 00:18:17.946 13:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:17.946 13:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:17.946 13:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.204 13:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.204 13:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:18.204 13:17:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.204 13:17:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.204 13:17:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.204 13:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:18.204 { 00:18:18.204 "auth": { 00:18:18.204 "dhgroup": "null", 00:18:18.204 "digest": "sha512", 00:18:18.204 "state": "completed" 00:18:18.204 }, 00:18:18.204 "cntlid": 97, 00:18:18.204 "listen_address": { 00:18:18.204 "adrfam": "IPv4", 00:18:18.204 "traddr": "10.0.0.2", 00:18:18.204 "trsvcid": "4420", 00:18:18.204 "trtype": "TCP" 00:18:18.204 }, 00:18:18.204 "peer_address": { 00:18:18.204 "adrfam": "IPv4", 00:18:18.204 "traddr": "10.0.0.1", 00:18:18.204 "trsvcid": "39200", 00:18:18.204 "trtype": "TCP" 00:18:18.204 }, 00:18:18.204 "qid": 0, 00:18:18.204 "state": "enabled" 00:18:18.204 } 00:18:18.204 ]' 00:18:18.204 13:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:18.462 13:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:18.462 13:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:18.462 13:17:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:18.462 13:17:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:18.462 13:17:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:18.462 13:17:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:18.462 13:17:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:18.720 13:17:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-secret DHHC-1:00:N2Y1YzYyNDk4OThiNWI3YTJlYTRlNzNkZTJjNjFmNmY0MjE5OTU4NmYzMDBhZmUyIkAOyQ==: --dhchap-ctrl-secret DHHC-1:03:NWNiY2YyMzRkNDNhMTc4YTM5ZmMxZDQ0YTg3ODY3NDRjYTdhMmQ0MWU5NDAyZjcyN2E2N2JjN2VlYzJiMTUwNLNbInk=: 00:18:19.653 13:17:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:19.653 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:19.653 13:17:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:18:19.653 13:17:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.653 13:17:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.653 13:17:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.653 13:17:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:19.653 13:17:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:19.653 13:17:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:19.653 13:17:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:18:19.653 13:17:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:19.653 13:17:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:19.653 13:17:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:19.653 13:17:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:19.653 13:17:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:19.653 13:17:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:19.653 13:17:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.653 13:17:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.911 13:17:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.911 13:17:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:19.911 13:17:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:20.167 00:18:20.167 13:17:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:20.167 13:17:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:18:20.167 13:17:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:20.425 13:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.425 13:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:20.425 13:17:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.425 13:17:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.425 13:17:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.425 13:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:20.425 { 00:18:20.425 "auth": { 00:18:20.425 "dhgroup": "null", 00:18:20.425 "digest": "sha512", 00:18:20.425 "state": "completed" 00:18:20.426 }, 00:18:20.426 "cntlid": 99, 00:18:20.426 "listen_address": { 00:18:20.426 "adrfam": "IPv4", 00:18:20.426 "traddr": "10.0.0.2", 00:18:20.426 "trsvcid": "4420", 00:18:20.426 "trtype": "TCP" 00:18:20.426 }, 00:18:20.426 "peer_address": { 00:18:20.426 "adrfam": "IPv4", 00:18:20.426 "traddr": "10.0.0.1", 00:18:20.426 "trsvcid": "39228", 00:18:20.426 "trtype": "TCP" 00:18:20.426 }, 00:18:20.426 "qid": 0, 00:18:20.426 "state": "enabled" 00:18:20.426 } 00:18:20.426 ]' 00:18:20.426 13:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:20.426 13:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:20.426 13:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:20.426 13:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:20.426 13:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:20.704 13:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:20.704 13:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:20.704 13:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:20.962 13:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-secret DHHC-1:01:YTYyNDU1MmNlNDc1MThjMjYyMGZlNzhhNzdjYjY5NmGb4/uq: --dhchap-ctrl-secret DHHC-1:02:ZjUwNWMyYjMyMGIwNDI3MmRkNDM2YzIwNzU4ZGMzYWY5ZmUxZmY2NzdiZjM0YjA5rEeJRw==: 00:18:21.526 13:17:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:21.526 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:21.526 13:17:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:18:21.526 13:17:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.526 13:17:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.526 13:17:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.526 13:17:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:21.526 13:17:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:21.526 13:17:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:21.784 13:17:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:18:21.784 13:17:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:21.784 13:17:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:21.784 13:17:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:21.784 13:17:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:21.784 13:17:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:21.784 13:17:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:21.784 13:17:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.784 13:17:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.784 13:17:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.784 13:17:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:21.784 13:17:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:22.348 00:18:22.348 13:17:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:22.348 13:17:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.348 13:17:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:22.605 13:17:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.605 13:17:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:22.605 13:17:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.605 13:17:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.605 13:17:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.605 13:17:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:22.605 { 00:18:22.605 "auth": { 00:18:22.605 "dhgroup": "null", 00:18:22.605 "digest": "sha512", 00:18:22.605 "state": "completed" 00:18:22.605 }, 00:18:22.605 "cntlid": 101, 00:18:22.605 "listen_address": { 00:18:22.605 "adrfam": "IPv4", 00:18:22.605 "traddr": "10.0.0.2", 00:18:22.605 "trsvcid": "4420", 00:18:22.605 "trtype": "TCP" 00:18:22.605 }, 00:18:22.605 "peer_address": { 00:18:22.605 "adrfam": "IPv4", 00:18:22.605 "traddr": "10.0.0.1", 00:18:22.605 "trsvcid": 
"60132", 00:18:22.605 "trtype": "TCP" 00:18:22.605 }, 00:18:22.605 "qid": 0, 00:18:22.605 "state": "enabled" 00:18:22.605 } 00:18:22.605 ]' 00:18:22.605 13:17:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:22.605 13:17:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:22.605 13:17:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:22.605 13:17:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:22.605 13:17:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:22.605 13:17:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:22.605 13:17:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.605 13:17:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:22.862 13:17:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-secret DHHC-1:02:OTg3OGFlYzU1NGIxYjI2OTRlODk0M2YxZjFmZGY1ZmIxMmJkMDdiM2UxOWZjNDRmzm1SbQ==: --dhchap-ctrl-secret DHHC-1:01:ZTYxNjU2NjQzYTE1ZmE5NTQ0MWYxZmVjNjU4NjA4OGIaIdv0: 00:18:23.428 13:17:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:23.428 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:23.428 13:17:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:18:23.428 13:17:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.428 13:17:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.428 13:17:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.428 13:17:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:23.428 13:17:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:23.428 13:17:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:23.685 13:17:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:18:23.686 13:17:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:23.686 13:17:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:23.686 13:17:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:23.686 13:17:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:23.686 13:17:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:23.686 13:17:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-key key3 00:18:23.686 13:17:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 
00:18:23.686 13:17:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.686 13:17:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.686 13:17:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:23.686 13:17:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:23.943 00:18:24.201 13:17:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:24.201 13:17:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:24.201 13:17:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:24.459 13:17:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.459 13:17:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:24.459 13:17:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.459 13:17:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.459 13:17:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.459 13:17:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:24.459 { 00:18:24.459 "auth": { 00:18:24.459 "dhgroup": "null", 00:18:24.459 "digest": "sha512", 00:18:24.459 "state": "completed" 00:18:24.459 }, 00:18:24.459 "cntlid": 103, 00:18:24.459 "listen_address": { 00:18:24.459 "adrfam": "IPv4", 00:18:24.459 "traddr": "10.0.0.2", 00:18:24.459 "trsvcid": "4420", 00:18:24.459 "trtype": "TCP" 00:18:24.459 }, 00:18:24.459 "peer_address": { 00:18:24.459 "adrfam": "IPv4", 00:18:24.459 "traddr": "10.0.0.1", 00:18:24.459 "trsvcid": "60148", 00:18:24.459 "trtype": "TCP" 00:18:24.459 }, 00:18:24.459 "qid": 0, 00:18:24.459 "state": "enabled" 00:18:24.460 } 00:18:24.460 ]' 00:18:24.460 13:17:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:24.460 13:17:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:24.460 13:17:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:24.460 13:17:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:24.460 13:17:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:24.460 13:17:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:24.460 13:17:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:24.460 13:17:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:24.717 13:17:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid 
c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-secret DHHC-1:03:ZWFkNmJmOGMyMjA4ZDI5NGJlZDFhYjg2ZjE4MmYyMGU1NmM3YzQ5NTkxMTJhOTk0MzVkZDU1N2Y5NGJiNjRmNd9JGNU=: 00:18:25.281 13:17:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:25.281 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:25.281 13:17:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:18:25.281 13:17:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.281 13:17:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.281 13:17:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.281 13:17:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:25.281 13:17:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:25.281 13:17:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:25.281 13:17:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:25.541 13:17:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:18:25.541 13:17:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:25.541 13:17:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:25.541 13:17:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:25.541 13:17:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:25.541 13:17:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:25.541 13:17:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:25.541 13:17:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.541 13:17:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.799 13:17:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.799 13:17:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:25.799 13:17:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:26.057 00:18:26.057 13:17:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:26.057 13:17:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:26.057 13:17:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:26.315 13:17:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.315 13:17:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:26.315 13:17:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.315 13:17:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.315 13:17:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.315 13:17:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:26.315 { 00:18:26.315 "auth": { 00:18:26.315 "dhgroup": "ffdhe2048", 00:18:26.315 "digest": "sha512", 00:18:26.315 "state": "completed" 00:18:26.315 }, 00:18:26.315 "cntlid": 105, 00:18:26.315 "listen_address": { 00:18:26.315 "adrfam": "IPv4", 00:18:26.315 "traddr": "10.0.0.2", 00:18:26.315 "trsvcid": "4420", 00:18:26.315 "trtype": "TCP" 00:18:26.315 }, 00:18:26.315 "peer_address": { 00:18:26.315 "adrfam": "IPv4", 00:18:26.315 "traddr": "10.0.0.1", 00:18:26.315 "trsvcid": "60184", 00:18:26.315 "trtype": "TCP" 00:18:26.315 }, 00:18:26.315 "qid": 0, 00:18:26.315 "state": "enabled" 00:18:26.315 } 00:18:26.315 ]' 00:18:26.315 13:17:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:26.315 13:17:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:26.315 13:17:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:26.315 13:17:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:26.315 13:17:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:26.315 13:17:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:26.315 13:17:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:26.315 13:17:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:26.881 13:17:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-secret DHHC-1:00:N2Y1YzYyNDk4OThiNWI3YTJlYTRlNzNkZTJjNjFmNmY0MjE5OTU4NmYzMDBhZmUyIkAOyQ==: --dhchap-ctrl-secret DHHC-1:03:NWNiY2YyMzRkNDNhMTc4YTM5ZmMxZDQ0YTg3ODY3NDRjYTdhMmQ0MWU5NDAyZjcyN2E2N2JjN2VlYzJiMTUwNLNbInk=: 00:18:27.446 13:17:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:27.446 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:27.446 13:17:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:18:27.446 13:17:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.446 13:17:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.446 13:17:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.446 13:17:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:27.446 13:17:24 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:27.446 13:17:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:27.704 13:17:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:18:27.704 13:17:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:27.704 13:17:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:27.704 13:17:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:27.704 13:17:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:27.704 13:17:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:27.704 13:17:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:27.704 13:17:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.704 13:17:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.704 13:17:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.704 13:17:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:27.704 13:17:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:27.962 00:18:27.962 13:17:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:27.962 13:17:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:27.962 13:17:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.220 13:17:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.220 13:17:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:28.220 13:17:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.220 13:17:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.220 13:17:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.220 13:17:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:28.220 { 00:18:28.220 "auth": { 00:18:28.220 "dhgroup": "ffdhe2048", 00:18:28.220 "digest": "sha512", 00:18:28.220 "state": "completed" 00:18:28.220 }, 00:18:28.220 "cntlid": 107, 00:18:28.220 "listen_address": { 00:18:28.220 "adrfam": "IPv4", 00:18:28.220 "traddr": "10.0.0.2", 00:18:28.220 "trsvcid": "4420", 00:18:28.220 "trtype": "TCP" 00:18:28.220 }, 00:18:28.220 "peer_address": { 00:18:28.220 
"adrfam": "IPv4", 00:18:28.220 "traddr": "10.0.0.1", 00:18:28.220 "trsvcid": "60214", 00:18:28.220 "trtype": "TCP" 00:18:28.220 }, 00:18:28.220 "qid": 0, 00:18:28.220 "state": "enabled" 00:18:28.220 } 00:18:28.220 ]' 00:18:28.478 13:17:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:28.478 13:17:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:28.478 13:17:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:28.478 13:17:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:28.478 13:17:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:28.478 13:17:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:28.478 13:17:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:28.478 13:17:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:28.734 13:17:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-secret DHHC-1:01:YTYyNDU1MmNlNDc1MThjMjYyMGZlNzhhNzdjYjY5NmGb4/uq: --dhchap-ctrl-secret DHHC-1:02:ZjUwNWMyYjMyMGIwNDI3MmRkNDM2YzIwNzU4ZGMzYWY5ZmUxZmY2NzdiZjM0YjA5rEeJRw==: 00:18:29.666 13:17:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:29.666 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:29.666 13:17:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:18:29.666 13:17:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.666 13:17:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.666 13:17:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.666 13:17:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:29.666 13:17:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:29.666 13:17:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:29.928 13:17:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:18:29.928 13:17:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:29.928 13:17:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:29.928 13:17:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:29.928 13:17:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:29.928 13:17:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:29.928 13:17:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:29.928 13:17:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.928 13:17:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.928 13:17:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.928 13:17:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:29.928 13:17:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:30.185 00:18:30.185 13:17:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:30.185 13:17:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:30.185 13:17:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:30.443 13:17:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.443 13:17:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:30.443 13:17:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.443 13:17:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.443 13:17:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.443 13:17:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:30.443 { 00:18:30.443 "auth": { 00:18:30.443 "dhgroup": "ffdhe2048", 00:18:30.443 "digest": "sha512", 00:18:30.443 "state": "completed" 00:18:30.443 }, 00:18:30.443 "cntlid": 109, 00:18:30.443 "listen_address": { 00:18:30.443 "adrfam": "IPv4", 00:18:30.443 "traddr": "10.0.0.2", 00:18:30.443 "trsvcid": "4420", 00:18:30.443 "trtype": "TCP" 00:18:30.443 }, 00:18:30.443 "peer_address": { 00:18:30.443 "adrfam": "IPv4", 00:18:30.443 "traddr": "10.0.0.1", 00:18:30.443 "trsvcid": "60240", 00:18:30.443 "trtype": "TCP" 00:18:30.443 }, 00:18:30.443 "qid": 0, 00:18:30.443 "state": "enabled" 00:18:30.443 } 00:18:30.443 ]' 00:18:30.443 13:17:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:30.443 13:17:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:30.443 13:17:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:30.443 13:17:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:30.443 13:17:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:30.443 13:17:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:30.443 13:17:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:30.443 13:17:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:31.008 13:17:27 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-secret DHHC-1:02:OTg3OGFlYzU1NGIxYjI2OTRlODk0M2YxZjFmZGY1ZmIxMmJkMDdiM2UxOWZjNDRmzm1SbQ==: --dhchap-ctrl-secret DHHC-1:01:ZTYxNjU2NjQzYTE1ZmE5NTQ0MWYxZmVjNjU4NjA4OGIaIdv0: 00:18:31.575 13:17:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:31.575 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:31.575 13:17:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:18:31.575 13:17:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.575 13:17:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.575 13:17:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.575 13:17:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:31.575 13:17:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:31.575 13:17:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:31.833 13:17:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:18:31.833 13:17:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:31.833 13:17:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:31.833 13:17:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:31.833 13:17:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:31.833 13:17:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:31.833 13:17:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-key key3 00:18:31.833 13:17:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.833 13:17:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.833 13:17:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.833 13:17:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:31.833 13:17:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:32.091 00:18:32.091 13:17:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:32.091 13:17:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:18:32.091 13:17:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:32.349 13:17:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.349 13:17:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:32.349 13:17:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.349 13:17:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.607 13:17:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.607 13:17:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:32.607 { 00:18:32.607 "auth": { 00:18:32.607 "dhgroup": "ffdhe2048", 00:18:32.607 "digest": "sha512", 00:18:32.607 "state": "completed" 00:18:32.607 }, 00:18:32.607 "cntlid": 111, 00:18:32.607 "listen_address": { 00:18:32.607 "adrfam": "IPv4", 00:18:32.607 "traddr": "10.0.0.2", 00:18:32.607 "trsvcid": "4420", 00:18:32.607 "trtype": "TCP" 00:18:32.607 }, 00:18:32.607 "peer_address": { 00:18:32.607 "adrfam": "IPv4", 00:18:32.607 "traddr": "10.0.0.1", 00:18:32.607 "trsvcid": "38394", 00:18:32.607 "trtype": "TCP" 00:18:32.607 }, 00:18:32.607 "qid": 0, 00:18:32.607 "state": "enabled" 00:18:32.607 } 00:18:32.607 ]' 00:18:32.607 13:17:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:32.607 13:17:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:32.607 13:17:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:32.607 13:17:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:32.607 13:17:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:32.607 13:17:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:32.607 13:17:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:32.607 13:17:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:32.865 13:17:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-secret DHHC-1:03:ZWFkNmJmOGMyMjA4ZDI5NGJlZDFhYjg2ZjE4MmYyMGU1NmM3YzQ5NTkxMTJhOTk0MzVkZDU1N2Y5NGJiNjRmNd9JGNU=: 00:18:33.447 13:17:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:33.447 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:33.447 13:17:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:18:33.447 13:17:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.447 13:17:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.447 13:17:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.447 13:17:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:33.447 13:17:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:18:33.447 13:17:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:33.447 13:17:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:33.706 13:17:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:18:33.706 13:17:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:33.706 13:17:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:33.706 13:17:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:33.706 13:17:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:33.706 13:17:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:33.706 13:17:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:33.706 13:17:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.706 13:17:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.706 13:17:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.706 13:17:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:33.706 13:17:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:34.271 00:18:34.271 13:17:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:34.271 13:17:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:34.271 13:17:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:34.529 13:17:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.529 13:17:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:34.529 13:17:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.529 13:17:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.529 13:17:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.529 13:17:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:34.529 { 00:18:34.529 "auth": { 00:18:34.529 "dhgroup": "ffdhe3072", 00:18:34.529 "digest": "sha512", 00:18:34.529 "state": "completed" 00:18:34.529 }, 00:18:34.529 "cntlid": 113, 00:18:34.529 "listen_address": { 00:18:34.529 "adrfam": "IPv4", 00:18:34.529 "traddr": "10.0.0.2", 00:18:34.529 "trsvcid": "4420", 00:18:34.529 "trtype": "TCP" 00:18:34.529 }, 00:18:34.529 
"peer_address": { 00:18:34.529 "adrfam": "IPv4", 00:18:34.529 "traddr": "10.0.0.1", 00:18:34.529 "trsvcid": "38416", 00:18:34.529 "trtype": "TCP" 00:18:34.529 }, 00:18:34.529 "qid": 0, 00:18:34.529 "state": "enabled" 00:18:34.529 } 00:18:34.529 ]' 00:18:34.529 13:17:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:34.529 13:17:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:34.529 13:17:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:34.529 13:17:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:34.529 13:17:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:34.529 13:17:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:34.529 13:17:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:34.529 13:17:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:34.786 13:17:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-secret DHHC-1:00:N2Y1YzYyNDk4OThiNWI3YTJlYTRlNzNkZTJjNjFmNmY0MjE5OTU4NmYzMDBhZmUyIkAOyQ==: --dhchap-ctrl-secret DHHC-1:03:NWNiY2YyMzRkNDNhMTc4YTM5ZmMxZDQ0YTg3ODY3NDRjYTdhMmQ0MWU5NDAyZjcyN2E2N2JjN2VlYzJiMTUwNLNbInk=: 00:18:35.720 13:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:35.721 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:35.721 13:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:18:35.721 13:17:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.721 13:17:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.721 13:17:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.721 13:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:35.721 13:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:35.721 13:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:35.721 13:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:18:35.721 13:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:35.721 13:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:35.721 13:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:35.721 13:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:35.721 13:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:35.721 13:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:35.721 13:17:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.721 13:17:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.721 13:17:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.721 13:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:35.721 13:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:35.978 00:18:36.235 13:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:36.236 13:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.236 13:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:36.236 13:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.236 13:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:36.236 13:17:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.236 13:17:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.493 13:17:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.493 13:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:36.493 { 00:18:36.493 "auth": { 00:18:36.493 "dhgroup": "ffdhe3072", 00:18:36.493 "digest": "sha512", 00:18:36.493 "state": "completed" 00:18:36.493 }, 00:18:36.493 "cntlid": 115, 00:18:36.493 "listen_address": { 00:18:36.493 "adrfam": "IPv4", 00:18:36.493 "traddr": "10.0.0.2", 00:18:36.493 "trsvcid": "4420", 00:18:36.493 "trtype": "TCP" 00:18:36.493 }, 00:18:36.493 "peer_address": { 00:18:36.493 "adrfam": "IPv4", 00:18:36.493 "traddr": "10.0.0.1", 00:18:36.493 "trsvcid": "38450", 00:18:36.493 "trtype": "TCP" 00:18:36.493 }, 00:18:36.493 "qid": 0, 00:18:36.493 "state": "enabled" 00:18:36.493 } 00:18:36.493 ]' 00:18:36.493 13:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:36.493 13:17:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:36.493 13:17:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:36.494 13:17:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:36.494 13:17:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:36.494 13:17:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:36.494 13:17:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:36.494 13:17:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:18:36.750 13:17:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-secret DHHC-1:01:YTYyNDU1MmNlNDc1MThjMjYyMGZlNzhhNzdjYjY5NmGb4/uq: --dhchap-ctrl-secret DHHC-1:02:ZjUwNWMyYjMyMGIwNDI3MmRkNDM2YzIwNzU4ZGMzYWY5ZmUxZmY2NzdiZjM0YjA5rEeJRw==: 00:18:37.683 13:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:37.683 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:37.683 13:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:18:37.683 13:17:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.683 13:17:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.683 13:17:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.683 13:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:37.683 13:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:37.683 13:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:37.683 13:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:18:37.683 13:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:37.683 13:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:37.683 13:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:37.683 13:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:37.683 13:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:37.683 13:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:37.683 13:17:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.683 13:17:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.683 13:17:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.683 13:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:37.683 13:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:38.249 00:18:38.249 13:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc 
bdev_nvme_get_controllers 00:18:38.249 13:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:38.249 13:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:38.507 13:17:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.507 13:17:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:38.507 13:17:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.507 13:17:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.507 13:17:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.507 13:17:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:38.507 { 00:18:38.507 "auth": { 00:18:38.507 "dhgroup": "ffdhe3072", 00:18:38.507 "digest": "sha512", 00:18:38.507 "state": "completed" 00:18:38.507 }, 00:18:38.507 "cntlid": 117, 00:18:38.507 "listen_address": { 00:18:38.507 "adrfam": "IPv4", 00:18:38.507 "traddr": "10.0.0.2", 00:18:38.507 "trsvcid": "4420", 00:18:38.507 "trtype": "TCP" 00:18:38.507 }, 00:18:38.507 "peer_address": { 00:18:38.507 "adrfam": "IPv4", 00:18:38.507 "traddr": "10.0.0.1", 00:18:38.507 "trsvcid": "38480", 00:18:38.507 "trtype": "TCP" 00:18:38.507 }, 00:18:38.507 "qid": 0, 00:18:38.507 "state": "enabled" 00:18:38.507 } 00:18:38.507 ]' 00:18:38.507 13:17:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:38.507 13:17:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:38.507 13:17:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:38.507 13:17:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:38.507 13:17:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:38.507 13:17:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:38.507 13:17:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:38.507 13:17:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:38.764 13:17:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-secret DHHC-1:02:OTg3OGFlYzU1NGIxYjI2OTRlODk0M2YxZjFmZGY1ZmIxMmJkMDdiM2UxOWZjNDRmzm1SbQ==: --dhchap-ctrl-secret DHHC-1:01:ZTYxNjU2NjQzYTE1ZmE5NTQ0MWYxZmVjNjU4NjA4OGIaIdv0: 00:18:39.697 13:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:39.697 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:39.697 13:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:18:39.697 13:17:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.697 13:17:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.697 13:17:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:18:39.697 13:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:39.697 13:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:39.697 13:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:39.955 13:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:18:39.955 13:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:39.955 13:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:39.955 13:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:39.955 13:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:39.955 13:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:39.955 13:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-key key3 00:18:39.955 13:17:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.955 13:17:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.955 13:17:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.955 13:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:39.955 13:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:40.211 00:18:40.211 13:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:40.211 13:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:40.211 13:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:40.469 13:17:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.469 13:17:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:40.469 13:17:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.469 13:17:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.469 13:17:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.469 13:17:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:40.469 { 00:18:40.469 "auth": { 00:18:40.469 "dhgroup": "ffdhe3072", 00:18:40.469 "digest": "sha512", 00:18:40.469 "state": "completed" 00:18:40.469 }, 00:18:40.469 "cntlid": 119, 00:18:40.469 "listen_address": { 00:18:40.469 "adrfam": "IPv4", 00:18:40.469 "traddr": "10.0.0.2", 00:18:40.469 "trsvcid": "4420", 00:18:40.469 "trtype": "TCP" 
00:18:40.469 }, 00:18:40.469 "peer_address": { 00:18:40.469 "adrfam": "IPv4", 00:18:40.469 "traddr": "10.0.0.1", 00:18:40.469 "trsvcid": "38506", 00:18:40.469 "trtype": "TCP" 00:18:40.469 }, 00:18:40.469 "qid": 0, 00:18:40.469 "state": "enabled" 00:18:40.469 } 00:18:40.469 ]' 00:18:40.469 13:17:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:40.469 13:17:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:40.469 13:17:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:40.469 13:17:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:40.469 13:17:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:40.727 13:17:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:40.727 13:17:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:40.727 13:17:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:40.984 13:17:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-secret DHHC-1:03:ZWFkNmJmOGMyMjA4ZDI5NGJlZDFhYjg2ZjE4MmYyMGU1NmM3YzQ5NTkxMTJhOTk0MzVkZDU1N2Y5NGJiNjRmNd9JGNU=: 00:18:41.550 13:17:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:41.550 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:41.550 13:17:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:18:41.550 13:17:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.550 13:17:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.550 13:17:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.550 13:17:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:41.550 13:17:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:41.550 13:17:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:41.550 13:17:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:41.808 13:17:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:18:41.808 13:17:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:41.808 13:17:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:41.808 13:17:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:41.808 13:17:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:41.808 13:17:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:41.808 13:17:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:41.808 13:17:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.808 13:17:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.808 13:17:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.808 13:17:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:41.808 13:17:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:42.373 00:18:42.373 13:17:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:42.373 13:17:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:42.373 13:17:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:42.631 13:17:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.631 13:17:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:42.631 13:17:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.631 13:17:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.631 13:17:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.631 13:17:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:42.631 { 00:18:42.631 "auth": { 00:18:42.631 "dhgroup": "ffdhe4096", 00:18:42.631 "digest": "sha512", 00:18:42.631 "state": "completed" 00:18:42.631 }, 00:18:42.631 "cntlid": 121, 00:18:42.631 "listen_address": { 00:18:42.631 "adrfam": "IPv4", 00:18:42.631 "traddr": "10.0.0.2", 00:18:42.631 "trsvcid": "4420", 00:18:42.631 "trtype": "TCP" 00:18:42.631 }, 00:18:42.631 "peer_address": { 00:18:42.631 "adrfam": "IPv4", 00:18:42.631 "traddr": "10.0.0.1", 00:18:42.631 "trsvcid": "60220", 00:18:42.631 "trtype": "TCP" 00:18:42.631 }, 00:18:42.631 "qid": 0, 00:18:42.631 "state": "enabled" 00:18:42.631 } 00:18:42.631 ]' 00:18:42.631 13:17:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:42.631 13:17:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:42.631 13:17:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:42.631 13:17:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:42.631 13:17:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:42.631 13:17:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:42.631 13:17:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:42.631 13:17:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:42.889 13:17:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-secret DHHC-1:00:N2Y1YzYyNDk4OThiNWI3YTJlYTRlNzNkZTJjNjFmNmY0MjE5OTU4NmYzMDBhZmUyIkAOyQ==: --dhchap-ctrl-secret DHHC-1:03:NWNiY2YyMzRkNDNhMTc4YTM5ZmMxZDQ0YTg3ODY3NDRjYTdhMmQ0MWU5NDAyZjcyN2E2N2JjN2VlYzJiMTUwNLNbInk=: 00:18:43.821 13:17:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:43.821 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:43.821 13:17:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:18:43.821 13:17:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.821 13:17:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.821 13:17:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.821 13:17:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:43.821 13:17:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:43.821 13:17:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:43.821 13:17:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:18:43.821 13:17:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:43.822 13:17:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:43.822 13:17:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:43.822 13:17:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:43.822 13:17:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:43.822 13:17:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:43.822 13:17:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.822 13:17:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.822 13:17:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.822 13:17:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:43.822 13:17:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
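At this point the xtrace has just attached the host controller with --dhchap-key key1 for the sha512 / ffdhe4096 pass. As an aid to reading the trace, the lines below are a condensed, hand-written sketch of one such connect_authenticate iteration; they are not part of the captured output. The rpc.py path, socket paths, NQNs and key names are copied from the trace, and the sketch assumes the subsystem, its TCP listener and the DH-HMAC-CHAP keyring entries (key0..key3 and ckey0..ckey3) were registered earlier in target/auth.sh, which is not shown in this part of the log.
# Minimal sketch of one digest/dhgroup/key pass, assuming the target and host setup above.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02

# Host side (bdev_nvme initiator RPC on /var/tmp/host.sock): pin the digest and DH group.
"$rpc" -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

# Target side (default RPC socket): allow the host on the subsystem with the matching key pair.
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Host side: attaching the controller performs the in-band DH-HMAC-CHAP exchange.
"$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Target side: the admin qpair should report the negotiated parameters once auth completes.
"$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth | "\(.digest) \(.dhgroup) \(.state)"'
# expected: sha512 ffdhe4096 completed

# Tear down, then repeat the same connection from the kernel initiator using the plaintext
# DHHC-1 secrets that correspond to key1/ckey1 (secret values elided here):
"$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
# nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" --hostid c8b8b44b-387e-43b9-a950-dc0d98528a02 \
#     --dhchap-secret 'DHHC-1:01:...' --dhchap-ctrl-secret 'DHHC-1:02:...'
# nvme disconnect -n "$subnqn"
"$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"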
00:18:44.387 00:18:44.387 13:17:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:44.387 13:17:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:44.387 13:17:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:44.645 13:17:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.645 13:17:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:44.645 13:17:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.645 13:17:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.645 13:17:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.645 13:17:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:44.645 { 00:18:44.645 "auth": { 00:18:44.645 "dhgroup": "ffdhe4096", 00:18:44.645 "digest": "sha512", 00:18:44.645 "state": "completed" 00:18:44.645 }, 00:18:44.645 "cntlid": 123, 00:18:44.645 "listen_address": { 00:18:44.645 "adrfam": "IPv4", 00:18:44.645 "traddr": "10.0.0.2", 00:18:44.645 "trsvcid": "4420", 00:18:44.645 "trtype": "TCP" 00:18:44.645 }, 00:18:44.645 "peer_address": { 00:18:44.645 "adrfam": "IPv4", 00:18:44.645 "traddr": "10.0.0.1", 00:18:44.645 "trsvcid": "60258", 00:18:44.645 "trtype": "TCP" 00:18:44.645 }, 00:18:44.645 "qid": 0, 00:18:44.645 "state": "enabled" 00:18:44.645 } 00:18:44.645 ]' 00:18:44.645 13:17:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:44.645 13:17:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:44.645 13:17:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:44.645 13:17:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:44.645 13:17:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:44.903 13:17:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:44.903 13:17:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:44.903 13:17:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:44.903 13:17:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-secret DHHC-1:01:YTYyNDU1MmNlNDc1MThjMjYyMGZlNzhhNzdjYjY5NmGb4/uq: --dhchap-ctrl-secret DHHC-1:02:ZjUwNWMyYjMyMGIwNDI3MmRkNDM2YzIwNzU4ZGMzYWY5ZmUxZmY2NzdiZjM0YjA5rEeJRw==: 00:18:45.836 13:17:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:45.836 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:45.836 13:17:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:18:45.836 13:17:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.836 13:17:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:18:45.836 13:17:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.836 13:17:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:45.836 13:17:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:45.836 13:17:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:46.094 13:17:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:18:46.094 13:17:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:46.094 13:17:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:46.094 13:17:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:46.094 13:17:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:46.094 13:17:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:46.094 13:17:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:46.094 13:17:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.094 13:17:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.094 13:17:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.094 13:17:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:46.094 13:17:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:46.352 00:18:46.352 13:17:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:46.352 13:17:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:46.352 13:17:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:46.619 13:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.619 13:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:46.619 13:17:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.619 13:17:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.619 13:17:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.619 13:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:46.619 { 00:18:46.619 "auth": { 00:18:46.619 "dhgroup": "ffdhe4096", 00:18:46.619 "digest": "sha512", 00:18:46.619 "state": "completed" 00:18:46.619 }, 00:18:46.619 "cntlid": 125, 
00:18:46.619 "listen_address": { 00:18:46.619 "adrfam": "IPv4", 00:18:46.619 "traddr": "10.0.0.2", 00:18:46.619 "trsvcid": "4420", 00:18:46.619 "trtype": "TCP" 00:18:46.619 }, 00:18:46.619 "peer_address": { 00:18:46.619 "adrfam": "IPv4", 00:18:46.619 "traddr": "10.0.0.1", 00:18:46.619 "trsvcid": "60276", 00:18:46.619 "trtype": "TCP" 00:18:46.619 }, 00:18:46.619 "qid": 0, 00:18:46.619 "state": "enabled" 00:18:46.619 } 00:18:46.619 ]' 00:18:46.619 13:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:46.619 13:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:46.619 13:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:46.881 13:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:46.881 13:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:46.881 13:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:46.881 13:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:46.881 13:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:47.139 13:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-secret DHHC-1:02:OTg3OGFlYzU1NGIxYjI2OTRlODk0M2YxZjFmZGY1ZmIxMmJkMDdiM2UxOWZjNDRmzm1SbQ==: --dhchap-ctrl-secret DHHC-1:01:ZTYxNjU2NjQzYTE1ZmE5NTQ0MWYxZmVjNjU4NjA4OGIaIdv0: 00:18:47.704 13:17:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:47.704 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:47.704 13:17:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:18:47.704 13:17:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.704 13:17:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.704 13:17:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.704 13:17:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:47.704 13:17:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:47.704 13:17:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:48.269 13:17:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:18:48.269 13:17:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:48.269 13:17:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:48.269 13:17:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:48.269 13:17:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:48.269 13:17:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:18:48.269 13:17:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-key key3 00:18:48.269 13:17:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.269 13:17:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.269 13:17:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.270 13:17:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:48.270 13:17:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:48.547 00:18:48.547 13:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:48.547 13:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:48.547 13:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:48.839 13:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.839 13:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:48.839 13:17:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.839 13:17:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.839 13:17:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.839 13:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:48.839 { 00:18:48.839 "auth": { 00:18:48.839 "dhgroup": "ffdhe4096", 00:18:48.839 "digest": "sha512", 00:18:48.839 "state": "completed" 00:18:48.839 }, 00:18:48.839 "cntlid": 127, 00:18:48.839 "listen_address": { 00:18:48.839 "adrfam": "IPv4", 00:18:48.839 "traddr": "10.0.0.2", 00:18:48.839 "trsvcid": "4420", 00:18:48.839 "trtype": "TCP" 00:18:48.839 }, 00:18:48.839 "peer_address": { 00:18:48.839 "adrfam": "IPv4", 00:18:48.839 "traddr": "10.0.0.1", 00:18:48.839 "trsvcid": "60310", 00:18:48.839 "trtype": "TCP" 00:18:48.839 }, 00:18:48.839 "qid": 0, 00:18:48.839 "state": "enabled" 00:18:48.839 } 00:18:48.839 ]' 00:18:48.839 13:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:48.839 13:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:48.839 13:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:48.839 13:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:48.839 13:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:48.839 13:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:48.839 13:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:48.839 13:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:49.097 13:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-secret DHHC-1:03:ZWFkNmJmOGMyMjA4ZDI5NGJlZDFhYjg2ZjE4MmYyMGU1NmM3YzQ5NTkxMTJhOTk0MzVkZDU1N2Y5NGJiNjRmNd9JGNU=: 00:18:50.029 13:17:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:50.029 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:50.029 13:17:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:18:50.029 13:17:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.029 13:17:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.029 13:17:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.029 13:17:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:50.029 13:17:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:50.029 13:17:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:50.029 13:17:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:50.287 13:17:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:18:50.287 13:17:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:50.287 13:17:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:50.287 13:17:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:50.287 13:17:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:50.287 13:17:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:50.287 13:17:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:50.288 13:17:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.288 13:17:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.288 13:17:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.288 13:17:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:50.288 13:17:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
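For readability, the following is a condensed sketch (not part of the captured output) of the sequence target/auth.sh repeats above for every digest/DH-group/key combination. The RPC socket, NQNs, addresses and flags are copied from the log itself; $HOSTNQN, $KEY2 and $CKEY2 are stand-ins for the host NQN and the literal DHHC-1 secrets printed in the log, and rpc_cmd denotes the target-side rpc.py wrapper the test uses.

  # One connect_authenticate iteration (illustrative sketch, assuming the names above).
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02
  # 1. Restrict the host-side bdev_nvme module to the digest/DH group under test.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
  # 2. Register the host on the subsystem with this round's key pair.
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # 3. Attach a controller from the SPDK host app and verify the authenticated qpair.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
      -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expects nvme0
  rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.digest'    # expects sha512
  rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.dhgroup'   # expects ffdhe4096
  rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'     # expects completed
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  # 4. Repeat the handshake with the kernel initiator using the DHHC-1 secrets, then clean up.
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q "$HOSTNQN" \
      --hostid c8b8b44b-387e-43b9-a950-dc0d98528a02 \
      --dhchap-secret "$KEY2" --dhchap-ctrl-secret "$CKEY2"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN"
  # Later in the log the host side is re-opened to all digests and DH groups and the negative
  # path is exercised: attaching with key2 while only key1 is registered on the subsystem must
  # fail (JSON-RPC Code=-5, Input/output error), which the test's NOT wrapper treats as success.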
00:18:50.545 00:18:50.803 13:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:50.803 13:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:50.803 13:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:51.061 13:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.061 13:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:51.061 13:17:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.061 13:17:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.061 13:17:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.061 13:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:51.061 { 00:18:51.061 "auth": { 00:18:51.061 "dhgroup": "ffdhe6144", 00:18:51.061 "digest": "sha512", 00:18:51.061 "state": "completed" 00:18:51.061 }, 00:18:51.061 "cntlid": 129, 00:18:51.061 "listen_address": { 00:18:51.061 "adrfam": "IPv4", 00:18:51.061 "traddr": "10.0.0.2", 00:18:51.061 "trsvcid": "4420", 00:18:51.061 "trtype": "TCP" 00:18:51.061 }, 00:18:51.061 "peer_address": { 00:18:51.061 "adrfam": "IPv4", 00:18:51.061 "traddr": "10.0.0.1", 00:18:51.061 "trsvcid": "42634", 00:18:51.061 "trtype": "TCP" 00:18:51.061 }, 00:18:51.061 "qid": 0, 00:18:51.061 "state": "enabled" 00:18:51.061 } 00:18:51.061 ]' 00:18:51.061 13:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:51.061 13:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:51.061 13:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:51.061 13:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:51.061 13:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:51.061 13:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:51.061 13:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:51.061 13:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:51.626 13:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-secret DHHC-1:00:N2Y1YzYyNDk4OThiNWI3YTJlYTRlNzNkZTJjNjFmNmY0MjE5OTU4NmYzMDBhZmUyIkAOyQ==: --dhchap-ctrl-secret DHHC-1:03:NWNiY2YyMzRkNDNhMTc4YTM5ZmMxZDQ0YTg3ODY3NDRjYTdhMmQ0MWU5NDAyZjcyN2E2N2JjN2VlYzJiMTUwNLNbInk=: 00:18:52.191 13:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:52.191 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:52.191 13:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:18:52.191 13:17:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.191 13:17:48 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.191 13:17:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.191 13:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:52.191 13:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:52.191 13:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:52.448 13:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:18:52.448 13:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:52.448 13:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:52.448 13:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:52.448 13:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:52.448 13:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:52.448 13:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:52.448 13:17:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.448 13:17:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.448 13:17:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.448 13:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:52.449 13:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:53.013 00:18:53.013 13:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:53.013 13:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:53.013 13:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:53.276 13:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.276 13:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:53.276 13:17:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.276 13:17:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.276 13:17:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.276 13:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:53.276 { 00:18:53.276 "auth": { 00:18:53.276 "dhgroup": "ffdhe6144", 00:18:53.276 "digest": "sha512", 00:18:53.276 
"state": "completed" 00:18:53.276 }, 00:18:53.276 "cntlid": 131, 00:18:53.276 "listen_address": { 00:18:53.276 "adrfam": "IPv4", 00:18:53.276 "traddr": "10.0.0.2", 00:18:53.276 "trsvcid": "4420", 00:18:53.276 "trtype": "TCP" 00:18:53.276 }, 00:18:53.276 "peer_address": { 00:18:53.276 "adrfam": "IPv4", 00:18:53.276 "traddr": "10.0.0.1", 00:18:53.276 "trsvcid": "42670", 00:18:53.276 "trtype": "TCP" 00:18:53.276 }, 00:18:53.276 "qid": 0, 00:18:53.276 "state": "enabled" 00:18:53.276 } 00:18:53.276 ]' 00:18:53.276 13:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:53.276 13:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:53.276 13:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:53.276 13:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:53.276 13:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:53.276 13:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:53.276 13:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:53.276 13:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:53.869 13:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-secret DHHC-1:01:YTYyNDU1MmNlNDc1MThjMjYyMGZlNzhhNzdjYjY5NmGb4/uq: --dhchap-ctrl-secret DHHC-1:02:ZjUwNWMyYjMyMGIwNDI3MmRkNDM2YzIwNzU4ZGMzYWY5ZmUxZmY2NzdiZjM0YjA5rEeJRw==: 00:18:54.433 13:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:54.433 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:54.433 13:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:18:54.433 13:17:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.433 13:17:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.433 13:17:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.433 13:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:54.433 13:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:54.433 13:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:54.690 13:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:18:54.690 13:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:54.690 13:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:54.690 13:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:54.690 13:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:54.690 13:17:51 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:54.690 13:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:54.690 13:17:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.690 13:17:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.690 13:17:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.690 13:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:54.690 13:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:54.947 00:18:55.205 13:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:55.205 13:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:55.205 13:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:55.463 13:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.464 13:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:55.464 13:17:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.464 13:17:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.464 13:17:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.464 13:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:55.464 { 00:18:55.464 "auth": { 00:18:55.464 "dhgroup": "ffdhe6144", 00:18:55.464 "digest": "sha512", 00:18:55.464 "state": "completed" 00:18:55.464 }, 00:18:55.464 "cntlid": 133, 00:18:55.464 "listen_address": { 00:18:55.464 "adrfam": "IPv4", 00:18:55.464 "traddr": "10.0.0.2", 00:18:55.464 "trsvcid": "4420", 00:18:55.464 "trtype": "TCP" 00:18:55.464 }, 00:18:55.464 "peer_address": { 00:18:55.464 "adrfam": "IPv4", 00:18:55.464 "traddr": "10.0.0.1", 00:18:55.464 "trsvcid": "42702", 00:18:55.464 "trtype": "TCP" 00:18:55.464 }, 00:18:55.464 "qid": 0, 00:18:55.464 "state": "enabled" 00:18:55.464 } 00:18:55.464 ]' 00:18:55.464 13:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:55.464 13:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:55.464 13:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:55.464 13:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:55.464 13:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:55.464 13:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:55.464 13:17:52 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:55.464 13:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:56.030 13:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-secret DHHC-1:02:OTg3OGFlYzU1NGIxYjI2OTRlODk0M2YxZjFmZGY1ZmIxMmJkMDdiM2UxOWZjNDRmzm1SbQ==: --dhchap-ctrl-secret DHHC-1:01:ZTYxNjU2NjQzYTE1ZmE5NTQ0MWYxZmVjNjU4NjA4OGIaIdv0: 00:18:56.595 13:17:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:56.595 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:56.595 13:17:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:18:56.595 13:17:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.595 13:17:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.595 13:17:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.595 13:17:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:56.595 13:17:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:56.596 13:17:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:56.853 13:17:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:18:56.853 13:17:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:56.853 13:17:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:56.853 13:17:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:56.853 13:17:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:56.853 13:17:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:56.853 13:17:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-key key3 00:18:56.853 13:17:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.853 13:17:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.853 13:17:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.853 13:17:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:56.853 13:17:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key3 00:18:57.418 00:18:57.418 13:17:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:57.418 13:17:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:57.418 13:17:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:57.676 13:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.676 13:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:57.676 13:17:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.676 13:17:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.676 13:17:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.676 13:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:57.676 { 00:18:57.676 "auth": { 00:18:57.676 "dhgroup": "ffdhe6144", 00:18:57.676 "digest": "sha512", 00:18:57.676 "state": "completed" 00:18:57.676 }, 00:18:57.676 "cntlid": 135, 00:18:57.676 "listen_address": { 00:18:57.676 "adrfam": "IPv4", 00:18:57.676 "traddr": "10.0.0.2", 00:18:57.676 "trsvcid": "4420", 00:18:57.676 "trtype": "TCP" 00:18:57.676 }, 00:18:57.676 "peer_address": { 00:18:57.676 "adrfam": "IPv4", 00:18:57.676 "traddr": "10.0.0.1", 00:18:57.676 "trsvcid": "42728", 00:18:57.676 "trtype": "TCP" 00:18:57.676 }, 00:18:57.676 "qid": 0, 00:18:57.676 "state": "enabled" 00:18:57.676 } 00:18:57.676 ]' 00:18:57.676 13:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:57.676 13:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:57.676 13:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:57.676 13:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:57.676 13:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:57.676 13:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:57.676 13:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:57.676 13:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:58.242 13:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-secret DHHC-1:03:ZWFkNmJmOGMyMjA4ZDI5NGJlZDFhYjg2ZjE4MmYyMGU1NmM3YzQ5NTkxMTJhOTk0MzVkZDU1N2Y5NGJiNjRmNd9JGNU=: 00:18:58.809 13:17:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:58.809 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:58.809 13:17:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:18:58.809 13:17:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.809 13:17:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.809 13:17:55 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.809 13:17:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:58.809 13:17:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:58.809 13:17:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:58.809 13:17:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:59.069 13:17:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:18:59.069 13:17:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:59.069 13:17:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:59.069 13:17:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:59.069 13:17:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:59.069 13:17:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:59.069 13:17:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:59.069 13:17:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.069 13:17:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.069 13:17:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.069 13:17:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:59.069 13:17:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:59.638 00:18:59.638 13:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:59.638 13:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:59.638 13:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:59.897 13:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.897 13:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:59.897 13:17:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.897 13:17:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.897 13:17:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.897 13:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:59.897 { 00:18:59.897 "auth": { 00:18:59.897 "dhgroup": "ffdhe8192", 00:18:59.897 "digest": "sha512", 
00:18:59.897 "state": "completed" 00:18:59.897 }, 00:18:59.897 "cntlid": 137, 00:18:59.897 "listen_address": { 00:18:59.897 "adrfam": "IPv4", 00:18:59.897 "traddr": "10.0.0.2", 00:18:59.897 "trsvcid": "4420", 00:18:59.897 "trtype": "TCP" 00:18:59.897 }, 00:18:59.897 "peer_address": { 00:18:59.897 "adrfam": "IPv4", 00:18:59.897 "traddr": "10.0.0.1", 00:18:59.897 "trsvcid": "42746", 00:18:59.897 "trtype": "TCP" 00:18:59.897 }, 00:18:59.897 "qid": 0, 00:18:59.897 "state": "enabled" 00:18:59.897 } 00:18:59.897 ]' 00:18:59.897 13:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:59.897 13:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:59.897 13:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:00.155 13:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:00.155 13:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:00.155 13:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:00.155 13:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:00.155 13:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:00.414 13:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-secret DHHC-1:00:N2Y1YzYyNDk4OThiNWI3YTJlYTRlNzNkZTJjNjFmNmY0MjE5OTU4NmYzMDBhZmUyIkAOyQ==: --dhchap-ctrl-secret DHHC-1:03:NWNiY2YyMzRkNDNhMTc4YTM5ZmMxZDQ0YTg3ODY3NDRjYTdhMmQ0MWU5NDAyZjcyN2E2N2JjN2VlYzJiMTUwNLNbInk=: 00:19:00.980 13:17:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:00.980 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:00.980 13:17:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:19:00.980 13:17:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.980 13:17:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.980 13:17:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.980 13:17:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:00.980 13:17:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:00.980 13:17:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:01.546 13:17:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:19:01.546 13:17:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:01.546 13:17:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:01.546 13:17:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:01.546 13:17:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
key=key1 00:19:01.546 13:17:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:01.546 13:17:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:01.546 13:17:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.546 13:17:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.546 13:17:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.546 13:17:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:01.546 13:17:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:02.114 00:19:02.114 13:17:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:02.114 13:17:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:02.114 13:17:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:02.371 13:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.371 13:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:02.371 13:17:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.371 13:17:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.371 13:17:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.371 13:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:02.371 { 00:19:02.371 "auth": { 00:19:02.371 "dhgroup": "ffdhe8192", 00:19:02.371 "digest": "sha512", 00:19:02.371 "state": "completed" 00:19:02.371 }, 00:19:02.371 "cntlid": 139, 00:19:02.371 "listen_address": { 00:19:02.372 "adrfam": "IPv4", 00:19:02.372 "traddr": "10.0.0.2", 00:19:02.372 "trsvcid": "4420", 00:19:02.372 "trtype": "TCP" 00:19:02.372 }, 00:19:02.372 "peer_address": { 00:19:02.372 "adrfam": "IPv4", 00:19:02.372 "traddr": "10.0.0.1", 00:19:02.372 "trsvcid": "39352", 00:19:02.372 "trtype": "TCP" 00:19:02.372 }, 00:19:02.372 "qid": 0, 00:19:02.372 "state": "enabled" 00:19:02.372 } 00:19:02.372 ]' 00:19:02.372 13:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:02.629 13:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:02.629 13:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:02.629 13:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:02.629 13:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:02.629 13:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:19:02.629 13:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:02.629 13:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:02.887 13:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-secret DHHC-1:01:YTYyNDU1MmNlNDc1MThjMjYyMGZlNzhhNzdjYjY5NmGb4/uq: --dhchap-ctrl-secret DHHC-1:02:ZjUwNWMyYjMyMGIwNDI3MmRkNDM2YzIwNzU4ZGMzYWY5ZmUxZmY2NzdiZjM0YjA5rEeJRw==: 00:19:03.851 13:18:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:03.851 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:03.851 13:18:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:19:03.851 13:18:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.851 13:18:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.851 13:18:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.851 13:18:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:03.851 13:18:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:03.851 13:18:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:04.109 13:18:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:19:04.109 13:18:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:04.109 13:18:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:04.109 13:18:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:04.109 13:18:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:04.109 13:18:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:04.109 13:18:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:04.109 13:18:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.109 13:18:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.109 13:18:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.109 13:18:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:04.109 13:18:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:04.674 00:19:04.674 13:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:04.674 13:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:04.674 13:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:04.932 13:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.932 13:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:04.932 13:18:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.932 13:18:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.932 13:18:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.932 13:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:04.932 { 00:19:04.932 "auth": { 00:19:04.932 "dhgroup": "ffdhe8192", 00:19:04.932 "digest": "sha512", 00:19:04.932 "state": "completed" 00:19:04.932 }, 00:19:04.932 "cntlid": 141, 00:19:04.932 "listen_address": { 00:19:04.932 "adrfam": "IPv4", 00:19:04.932 "traddr": "10.0.0.2", 00:19:04.932 "trsvcid": "4420", 00:19:04.932 "trtype": "TCP" 00:19:04.932 }, 00:19:04.932 "peer_address": { 00:19:04.932 "adrfam": "IPv4", 00:19:04.932 "traddr": "10.0.0.1", 00:19:04.932 "trsvcid": "39386", 00:19:04.932 "trtype": "TCP" 00:19:04.932 }, 00:19:04.932 "qid": 0, 00:19:04.932 "state": "enabled" 00:19:04.932 } 00:19:04.932 ]' 00:19:04.932 13:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:05.189 13:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:05.189 13:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:05.189 13:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:05.189 13:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:05.189 13:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:05.189 13:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:05.189 13:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:05.446 13:18:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-secret DHHC-1:02:OTg3OGFlYzU1NGIxYjI2OTRlODk0M2YxZjFmZGY1ZmIxMmJkMDdiM2UxOWZjNDRmzm1SbQ==: --dhchap-ctrl-secret DHHC-1:01:ZTYxNjU2NjQzYTE1ZmE5NTQ0MWYxZmVjNjU4NjA4OGIaIdv0: 00:19:06.380 13:18:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:06.380 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:06.380 13:18:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:19:06.380 13:18:02 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.380 13:18:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.380 13:18:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.380 13:18:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:06.380 13:18:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:06.380 13:18:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:06.638 13:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:19:06.638 13:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:06.638 13:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:06.638 13:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:06.638 13:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:06.638 13:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:06.638 13:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-key key3 00:19:06.638 13:18:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.638 13:18:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.638 13:18:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.638 13:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:06.638 13:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:07.203 00:19:07.203 13:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:07.203 13:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:07.203 13:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:07.462 13:18:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.462 13:18:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:07.462 13:18:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.462 13:18:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.462 13:18:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.462 13:18:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:07.462 { 00:19:07.462 "auth": { 00:19:07.462 "dhgroup": "ffdhe8192", 00:19:07.462 
"digest": "sha512", 00:19:07.462 "state": "completed" 00:19:07.462 }, 00:19:07.462 "cntlid": 143, 00:19:07.462 "listen_address": { 00:19:07.462 "adrfam": "IPv4", 00:19:07.462 "traddr": "10.0.0.2", 00:19:07.462 "trsvcid": "4420", 00:19:07.462 "trtype": "TCP" 00:19:07.462 }, 00:19:07.462 "peer_address": { 00:19:07.462 "adrfam": "IPv4", 00:19:07.462 "traddr": "10.0.0.1", 00:19:07.462 "trsvcid": "39424", 00:19:07.462 "trtype": "TCP" 00:19:07.462 }, 00:19:07.462 "qid": 0, 00:19:07.462 "state": "enabled" 00:19:07.462 } 00:19:07.462 ]' 00:19:07.462 13:18:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:07.462 13:18:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:07.462 13:18:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:07.720 13:18:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:07.720 13:18:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:07.720 13:18:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:07.720 13:18:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:07.720 13:18:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:07.977 13:18:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-secret DHHC-1:03:ZWFkNmJmOGMyMjA4ZDI5NGJlZDFhYjg2ZjE4MmYyMGU1NmM3YzQ5NTkxMTJhOTk0MzVkZDU1N2Y5NGJiNjRmNd9JGNU=: 00:19:08.544 13:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:08.544 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:08.544 13:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:19:08.544 13:18:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.544 13:18:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.544 13:18:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.544 13:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:19:08.544 13:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:19:08.544 13:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:19:08.544 13:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:08.544 13:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:08.544 13:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:08.803 13:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:19:08.803 13:18:05 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:08.803 13:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:08.803 13:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:08.803 13:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:08.803 13:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:08.803 13:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:08.803 13:18:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.803 13:18:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.803 13:18:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.803 13:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:08.803 13:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:09.736 00:19:09.736 13:18:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:09.736 13:18:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:09.736 13:18:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:09.993 13:18:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:09.993 13:18:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:09.993 13:18:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.993 13:18:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.993 13:18:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.993 13:18:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:09.993 { 00:19:09.993 "auth": { 00:19:09.993 "dhgroup": "ffdhe8192", 00:19:09.993 "digest": "sha512", 00:19:09.993 "state": "completed" 00:19:09.993 }, 00:19:09.993 "cntlid": 145, 00:19:09.993 "listen_address": { 00:19:09.993 "adrfam": "IPv4", 00:19:09.993 "traddr": "10.0.0.2", 00:19:09.993 "trsvcid": "4420", 00:19:09.993 "trtype": "TCP" 00:19:09.993 }, 00:19:09.993 "peer_address": { 00:19:09.993 "adrfam": "IPv4", 00:19:09.993 "traddr": "10.0.0.1", 00:19:09.993 "trsvcid": "39438", 00:19:09.993 "trtype": "TCP" 00:19:09.993 }, 00:19:09.993 "qid": 0, 00:19:09.993 "state": "enabled" 00:19:09.993 } 00:19:09.993 ]' 00:19:09.993 13:18:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:09.993 13:18:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:09.993 13:18:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- 
# jq -r '.[0].auth.dhgroup' 00:19:09.993 13:18:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:09.993 13:18:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:09.993 13:18:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:09.993 13:18:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:09.993 13:18:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:10.555 13:18:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-secret DHHC-1:00:N2Y1YzYyNDk4OThiNWI3YTJlYTRlNzNkZTJjNjFmNmY0MjE5OTU4NmYzMDBhZmUyIkAOyQ==: --dhchap-ctrl-secret DHHC-1:03:NWNiY2YyMzRkNDNhMTc4YTM5ZmMxZDQ0YTg3ODY3NDRjYTdhMmQ0MWU5NDAyZjcyN2E2N2JjN2VlYzJiMTUwNLNbInk=: 00:19:11.120 13:18:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:11.120 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:11.120 13:18:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:19:11.120 13:18:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.120 13:18:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.120 13:18:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.120 13:18:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-key key1 00:19:11.120 13:18:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.120 13:18:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.120 13:18:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.120 13:18:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:11.120 13:18:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:11.120 13:18:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:11.120 13:18:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:11.120 13:18:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:11.120 13:18:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:11.120 13:18:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:11.120 13:18:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:11.120 13:18:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:11.685 2024/07/15 13:18:08 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 dhchap_key:key2 hostnqn:nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 name:nvme0 subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:19:11.685 request: 00:19:11.685 { 00:19:11.685 "method": "bdev_nvme_attach_controller", 00:19:11.685 "params": { 00:19:11.685 "name": "nvme0", 00:19:11.685 "trtype": "tcp", 00:19:11.685 "traddr": "10.0.0.2", 00:19:11.685 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02", 00:19:11.685 "adrfam": "ipv4", 00:19:11.685 "trsvcid": "4420", 00:19:11.685 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:11.685 "dhchap_key": "key2" 00:19:11.685 } 00:19:11.685 } 00:19:11.685 Got JSON-RPC error response 00:19:11.685 GoRPCClient: error on JSON-RPC call 00:19:11.685 13:18:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:11.685 13:18:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:11.685 13:18:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:11.685 13:18:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:11.685 13:18:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:19:11.685 13:18:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.685 13:18:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.685 13:18:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.685 13:18:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:11.685 13:18:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.685 13:18:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.943 13:18:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.943 13:18:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:11.943 13:18:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:11.943 13:18:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:11.943 13:18:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:11.943 13:18:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:11.943 13:18:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:11.943 13:18:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:11.943 13:18:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:11.943 13:18:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:12.508 2024/07/15 13:18:09 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 dhchap_ctrlr_key:ckey2 dhchap_key:key1 hostnqn:nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 name:nvme0 subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:19:12.508 request: 00:19:12.508 { 00:19:12.508 "method": "bdev_nvme_attach_controller", 00:19:12.508 "params": { 00:19:12.508 "name": "nvme0", 00:19:12.508 "trtype": "tcp", 00:19:12.508 "traddr": "10.0.0.2", 00:19:12.508 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02", 00:19:12.508 "adrfam": "ipv4", 00:19:12.508 "trsvcid": "4420", 00:19:12.508 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:12.508 "dhchap_key": "key1", 00:19:12.508 "dhchap_ctrlr_key": "ckey2" 00:19:12.508 } 00:19:12.508 } 00:19:12.508 Got JSON-RPC error response 00:19:12.508 GoRPCClient: error on JSON-RPC call 00:19:12.508 13:18:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:12.508 13:18:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:12.508 13:18:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:12.508 13:18:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:12.508 13:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:19:12.508 13:18:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.508 13:18:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.508 13:18:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.508 13:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-key key1 00:19:12.508 13:18:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.508 13:18:09 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:12.508 13:18:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.508 13:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:12.508 13:18:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:12.508 13:18:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:12.508 13:18:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:12.508 13:18:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:12.508 13:18:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:12.509 13:18:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:12.509 13:18:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:12.509 13:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:13.074 2024/07/15 13:18:09 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 dhchap_ctrlr_key:ckey1 dhchap_key:key1 hostnqn:nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 name:nvme0 subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:19:13.074 request: 00:19:13.074 { 00:19:13.074 "method": "bdev_nvme_attach_controller", 00:19:13.074 "params": { 00:19:13.074 "name": "nvme0", 00:19:13.074 "trtype": "tcp", 00:19:13.074 "traddr": "10.0.0.2", 00:19:13.074 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02", 00:19:13.074 "adrfam": "ipv4", 00:19:13.074 "trsvcid": "4420", 00:19:13.074 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:13.074 "dhchap_key": "key1", 00:19:13.074 "dhchap_ctrlr_key": "ckey1" 00:19:13.074 } 00:19:13.074 } 00:19:13.074 Got JSON-RPC error response 00:19:13.074 GoRPCClient: error on JSON-RPC call 00:19:13.074 13:18:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:13.074 13:18:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:13.074 13:18:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:13.074 13:18:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:13.074 13:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:19:13.074 13:18:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.074 13:18:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.074 13:18:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.074 13:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 93332 00:19:13.074 13:18:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 93332 ']' 00:19:13.074 13:18:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 93332 00:19:13.074 13:18:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:19:13.074 13:18:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:13.074 13:18:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 93332 00:19:13.074 13:18:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:13.074 killing process with pid 93332 00:19:13.074 13:18:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:13.074 13:18:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 93332' 00:19:13.074 13:18:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 93332 00:19:13.074 13:18:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 93332 00:19:13.331 13:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:19:13.331 13:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:13.331 13:18:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:13.331 13:18:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.331 13:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=98240 00:19:13.331 13:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 98240 00:19:13.331 13:18:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 98240 ']' 00:19:13.331 13:18:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:13.331 13:18:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:13.331 13:18:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:13.331 13:18:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:13.331 13:18:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.331 13:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:19:14.748 13:18:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:14.748 13:18:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:19:14.748 13:18:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:14.748 13:18:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:14.748 13:18:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.748 13:18:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:14.748 13:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:14.748 13:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 98240 00:19:14.748 13:18:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 98240 ']' 00:19:14.748 13:18:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:14.748 13:18:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:14.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:14.748 13:18:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
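The connect_authenticate passes traced above all follow the same shape: the target registers the host NQN together with a DH-CHAP key pair, the host-side bdev attaches over TCP with the matching keys, and the test then reads the qpair back to confirm the negotiated digest, DH group and completed state. Condensed to the RPC calls actually issued in the trace (addresses, NQNs and socket paths copied from it; a sketch of the happy path, not the full auth.sh logic):

  # target side (default /var/tmp/spdk.sock): allow the host NQN with key0/ckey0
  scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # host side (/var/tmp/host.sock): attach over TCP, presenting the same key pair
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 \
      -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # check what the qpair negotiated; the test expects sha512 / ffdhe8192 / completed
  scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
      | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'

  # detach before the next permutation (the kernel path repeats this via nvme connect/disconnect)
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0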
00:19:14.748 13:18:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:14.748 13:18:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.748 13:18:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:14.748 13:18:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:19:14.748 13:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:19:14.748 13:18:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.748 13:18:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.005 13:18:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.005 13:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:19:15.005 13:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:15.005 13:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:15.005 13:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:15.005 13:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:15.005 13:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:15.005 13:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-key key3 00:19:15.005 13:18:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.005 13:18:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.005 13:18:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.005 13:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:15.005 13:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:15.588 00:19:15.588 13:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:15.588 13:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:15.588 13:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:15.846 13:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.846 13:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:15.846 13:18:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.846 13:18:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.846 13:18:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.846 13:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:15.846 { 00:19:15.846 "auth": { 00:19:15.846 "dhgroup": 
"ffdhe8192", 00:19:15.846 "digest": "sha512", 00:19:15.846 "state": "completed" 00:19:15.846 }, 00:19:15.846 "cntlid": 1, 00:19:15.846 "listen_address": { 00:19:15.846 "adrfam": "IPv4", 00:19:15.846 "traddr": "10.0.0.2", 00:19:15.846 "trsvcid": "4420", 00:19:15.846 "trtype": "TCP" 00:19:15.846 }, 00:19:15.846 "peer_address": { 00:19:15.846 "adrfam": "IPv4", 00:19:15.846 "traddr": "10.0.0.1", 00:19:15.846 "trsvcid": "38318", 00:19:15.846 "trtype": "TCP" 00:19:15.846 }, 00:19:15.846 "qid": 0, 00:19:15.846 "state": "enabled" 00:19:15.846 } 00:19:15.846 ]' 00:19:15.846 13:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:15.846 13:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:15.846 13:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:16.103 13:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:16.103 13:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:16.103 13:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:16.103 13:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:16.103 13:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:16.361 13:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-secret DHHC-1:03:ZWFkNmJmOGMyMjA4ZDI5NGJlZDFhYjg2ZjE4MmYyMGU1NmM3YzQ5NTkxMTJhOTk0MzVkZDU1N2Y5NGJiNjRmNd9JGNU=: 00:19:16.923 13:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:16.923 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:16.923 13:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:19:16.923 13:18:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.923 13:18:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.923 13:18:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.923 13:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --dhchap-key key3 00:19:16.923 13:18:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.923 13:18:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.923 13:18:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.923 13:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:19:16.923 13:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:19:17.215 13:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:17.215 13:18:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:17.215 13:18:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:17.215 13:18:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:17.215 13:18:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:17.215 13:18:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:17.215 13:18:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:17.215 13:18:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:17.215 13:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:17.472 2024/07/15 13:18:14 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 dhchap_key:key3 hostnqn:nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 name:nvme0 subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:19:17.472 request: 00:19:17.472 { 00:19:17.472 "method": "bdev_nvme_attach_controller", 00:19:17.472 "params": { 00:19:17.472 "name": "nvme0", 00:19:17.472 "trtype": "tcp", 00:19:17.472 "traddr": "10.0.0.2", 00:19:17.472 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02", 00:19:17.472 "adrfam": "ipv4", 00:19:17.472 "trsvcid": "4420", 00:19:17.472 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:17.472 "dhchap_key": "key3" 00:19:17.472 } 00:19:17.472 } 00:19:17.472 Got JSON-RPC error response 00:19:17.472 GoRPCClient: error on JSON-RPC call 00:19:17.472 13:18:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:17.472 13:18:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:17.472 13:18:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:17.472 13:18:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:17.472 13:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:19:17.472 13:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:19:17.472 13:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:17.472 13:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:17.729 13:18:14 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:17.729 13:18:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:17.729 13:18:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:17.729 13:18:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:17.729 13:18:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:17.729 13:18:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:17.729 13:18:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:17.729 13:18:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:17.729 13:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:17.987 2024/07/15 13:18:14 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 dhchap_key:key3 hostnqn:nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 name:nvme0 subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:19:17.987 request: 00:19:17.987 { 00:19:17.987 "method": "bdev_nvme_attach_controller", 00:19:17.987 "params": { 00:19:17.987 "name": "nvme0", 00:19:17.987 "trtype": "tcp", 00:19:17.987 "traddr": "10.0.0.2", 00:19:17.987 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02", 00:19:17.987 "adrfam": "ipv4", 00:19:17.987 "trsvcid": "4420", 00:19:17.987 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:17.987 "dhchap_key": "key3" 00:19:17.987 } 00:19:17.987 } 00:19:17.987 Got JSON-RPC error response 00:19:17.987 GoRPCClient: error on JSON-RPC call 00:19:18.244 13:18:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:18.244 13:18:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:18.244 13:18:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:18.244 13:18:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:18.244 13:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:19:18.244 13:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:19:18.244 13:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:19:18.244 13:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:18.244 13:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:18.244 13:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:18.501 13:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:19:18.501 13:18:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.501 13:18:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.501 13:18:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.502 13:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:19:18.502 13:18:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.502 13:18:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.502 13:18:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.502 13:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:18.502 13:18:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:18.502 13:18:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:18.502 13:18:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:18.502 13:18:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:18.502 13:18:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:18.502 13:18:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:18.502 13:18:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:18.502 13:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:18.759 2024/07/15 13:18:15 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 dhchap_ctrlr_key:key1 dhchap_key:key0 hostnqn:nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 name:nvme0 subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], 
err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:19:18.759 request: 00:19:18.759 { 00:19:18.759 "method": "bdev_nvme_attach_controller", 00:19:18.759 "params": { 00:19:18.759 "name": "nvme0", 00:19:18.759 "trtype": "tcp", 00:19:18.759 "traddr": "10.0.0.2", 00:19:18.759 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02", 00:19:18.759 "adrfam": "ipv4", 00:19:18.759 "trsvcid": "4420", 00:19:18.759 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:18.759 "dhchap_key": "key0", 00:19:18.759 "dhchap_ctrlr_key": "key1" 00:19:18.759 } 00:19:18.759 } 00:19:18.759 Got JSON-RPC error response 00:19:18.759 GoRPCClient: error on JSON-RPC call 00:19:18.759 13:18:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:18.759 13:18:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:18.759 13:18:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:18.759 13:18:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:18.759 13:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:18.759 13:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:19.016 00:19:19.016 13:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:19:19.016 13:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:19:19.016 13:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:19.278 13:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.278 13:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:19.278 13:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:19.543 13:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:19:19.543 13:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:19:19.544 13:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 93376 00:19:19.544 13:18:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 93376 ']' 00:19:19.544 13:18:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 93376 00:19:19.544 13:18:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:19:19.544 13:18:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:19.544 13:18:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 93376 00:19:19.544 killing process with pid 93376 00:19:19.544 13:18:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:19:19.544 13:18:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 
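The NOT cases that produced the Input/output errors above are the mirror image of that flow: the target is configured with one key (or with no key at all), the host attaches with a key or controller key the target does not hold, and the attach RPC must fail; the NOT helper from autotest_common.sh inverts the exit status so the expected failure counts as a pass. Schematically, with $hostnqn standing in for the full uuid NQN shown in the trace:

  # target side: this host is only registered with key1
  scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key key1

  # host side: presenting key2 must be rejected; the RPC comes back with Code=-5
  # (Input/output error) and NOT flips that failure into a test pass
  NOT scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2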
00:19:19.544 13:18:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 93376' 00:19:19.544 13:18:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 93376 00:19:19.544 13:18:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 93376 00:19:20.108 13:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:19:20.108 13:18:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:20.108 13:18:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:19:20.108 13:18:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:20.108 13:18:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:19:20.108 13:18:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:20.108 13:18:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:20.108 rmmod nvme_tcp 00:19:20.108 rmmod nvme_fabrics 00:19:20.108 rmmod nvme_keyring 00:19:20.108 13:18:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:20.108 13:18:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:19:20.108 13:18:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:19:20.108 13:18:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 98240 ']' 00:19:20.108 13:18:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 98240 00:19:20.108 13:18:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 98240 ']' 00:19:20.108 13:18:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 98240 00:19:20.108 13:18:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:19:20.108 13:18:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:20.108 13:18:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 98240 00:19:20.108 killing process with pid 98240 00:19:20.108 13:18:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:20.108 13:18:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:20.108 13:18:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 98240' 00:19:20.108 13:18:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 98240 00:19:20.108 13:18:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 98240 00:19:20.366 13:18:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:20.366 13:18:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:20.366 13:18:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:20.366 13:18:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:20.366 13:18:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:20.366 13:18:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:20.366 13:18:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:20.366 13:18:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:20.366 13:18:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:20.366 13:18:16 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.DoG /tmp/spdk.key-sha256.DKk /tmp/spdk.key-sha384.Tkw /tmp/spdk.key-sha512.Tin /tmp/spdk.key-sha512.iWT /tmp/spdk.key-sha384.gap /tmp/spdk.key-sha256.9Q6 '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:19:20.366 00:19:20.366 real 2m56.892s 00:19:20.366 user 7m10.070s 00:19:20.366 sys 0m22.811s 00:19:20.366 13:18:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:20.366 13:18:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.366 ************************************ 00:19:20.366 END TEST nvmf_auth_target 00:19:20.366 ************************************ 00:19:20.366 13:18:17 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:19:20.366 13:18:17 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:20.366 13:18:17 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:19:20.366 13:18:17 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:20.366 13:18:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:20.366 ************************************ 00:19:20.366 START TEST nvmf_bdevio_no_huge 00:19:20.366 ************************************ 00:19:20.366 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:20.624 * Looking for test storage... 00:19:20.624 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:20.624 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:20.624 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:19:20.624 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:20.624 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:20.624 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:20.624 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:20.624 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:20.624 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:20.624 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:20.624 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:20.624 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:20.624 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:20.624 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:19:20.624 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=c8b8b44b-387e-43b9-a950-dc0d98528a02 00:19:20.624 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:20.624 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:20.624 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 
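As with the auth test above, bdevio begins by sourcing test/nvmf/common.sh, which derives the host identity reused for the rest of the run: nvme gen-hostnqn produces the uuid-based host NQN, and the same uuid is carried as the host ID for --hostid. A rough sketch of those assignments (the suffix extraction shown here is illustrative; the trace only records the resulting values):

  NVME_HOSTNQN=$(nvme gen-hostnqn)        # nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}         # illustrative: the trailing uuid, reused as --hostid
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")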
00:19:20.624 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:20.624 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:20.624 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:20.624 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:20.624 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:20.625 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.625 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.625 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.625 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:19:20.625 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.625 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:19:20.625 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:20.625 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:20.625 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:20.625 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:20.625 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:20.625 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:20.625 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:20.625 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:20.625 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:20.625 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:20.625 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:19:20.625 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:20.625 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:20.625 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:20.625 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:20.625 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:20.625 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:20.625 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:20.625 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:20.625 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:20.625 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:20.625 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:20.625 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:20.625 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:20.625 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:20.625 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:20.625 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:20.625 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:20.625 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:20.625 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:20.625 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:20.625 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:20.625 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:20.625 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:20.625 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:20.625 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:20.625 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:20.625 13:18:17 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:20.625 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:20.625 Cannot find device "nvmf_tgt_br" 00:19:20.625 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # true 00:19:20.625 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:20.625 Cannot find device "nvmf_tgt_br2" 00:19:20.625 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # true 00:19:20.625 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:20.625 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:20.625 Cannot find device "nvmf_tgt_br" 00:19:20.625 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # true 00:19:20.625 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:20.625 Cannot find device "nvmf_tgt_br2" 00:19:20.625 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # true 00:19:20.625 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:20.625 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:20.625 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:20.625 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:20.625 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:19:20.625 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:20.625 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:20.625 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:19:20.625 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:20.625 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:20.625 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:20.625 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:20.625 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:20.625 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:20.625 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:20.883 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:20.883 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:20.883 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:20.883 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:20.883 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:20.883 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:20.883 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:20.883 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:20.883 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:20.883 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:20.883 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:20.884 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:20.884 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:20.884 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:20.884 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:20.884 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:20.884 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:20.884 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:20.884 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:19:20.884 00:19:20.884 --- 10.0.0.2 ping statistics --- 00:19:20.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:20.884 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:19:20.884 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:20.884 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:20.884 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:19:20.884 00:19:20.884 --- 10.0.0.3 ping statistics --- 00:19:20.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:20.884 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:19:20.884 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:20.884 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:20.884 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:19:20.884 00:19:20.884 --- 10.0.0.1 ping statistics --- 00:19:20.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:20.884 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:19:20.884 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:20.884 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@433 -- # return 0 00:19:20.884 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:20.884 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:20.884 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:20.884 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:20.884 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:20.884 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:20.884 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:20.884 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:20.884 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:20.884 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:20.884 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:20.884 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=98645 00:19:20.884 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:19:20.884 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 98645 00:19:20.884 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@827 -- # '[' -z 98645 ']' 00:19:20.884 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:20.884 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:20.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:20.884 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:20.884 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:20.884 13:18:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:20.884 [2024-07-15 13:18:17.582655] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:19:20.884 [2024-07-15 13:18:17.582783] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:19:21.141 [2024-07-15 13:18:17.727854] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:21.141 [2024-07-15 13:18:17.827376] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:19:21.141 [2024-07-15 13:18:17.827431] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:21.141 [2024-07-15 13:18:17.827443] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:21.141 [2024-07-15 13:18:17.827451] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:21.141 [2024-07-15 13:18:17.827459] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:21.141 [2024-07-15 13:18:17.828179] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:21.141 [2024-07-15 13:18:17.828299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:19:21.141 [2024-07-15 13:18:17.831242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:19:21.141 [2024-07-15 13:18:17.831265] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:22.073 13:18:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:22.073 13:18:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # return 0 00:19:22.073 13:18:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:22.073 13:18:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:22.073 13:18:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:22.073 13:18:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:22.073 13:18:18 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:22.073 13:18:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.073 13:18:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:22.073 [2024-07-15 13:18:18.653766] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:22.073 13:18:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.073 13:18:18 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:22.073 13:18:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.073 13:18:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:22.073 Malloc0 00:19:22.073 13:18:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.073 13:18:18 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:22.073 13:18:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.073 13:18:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:22.073 13:18:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.073 13:18:18 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:22.073 13:18:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.073 13:18:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:22.073 13:18:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.073 13:18:18 
nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:22.073 13:18:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.073 13:18:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:22.073 [2024-07-15 13:18:18.705788] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:22.073 13:18:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.073 13:18:18 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:19:22.073 13:18:18 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:22.073 13:18:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:19:22.073 13:18:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:19:22.073 13:18:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:22.073 13:18:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:22.073 { 00:19:22.073 "params": { 00:19:22.073 "name": "Nvme$subsystem", 00:19:22.073 "trtype": "$TEST_TRANSPORT", 00:19:22.073 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:22.073 "adrfam": "ipv4", 00:19:22.073 "trsvcid": "$NVMF_PORT", 00:19:22.073 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:22.073 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:22.073 "hdgst": ${hdgst:-false}, 00:19:22.073 "ddgst": ${ddgst:-false} 00:19:22.073 }, 00:19:22.073 "method": "bdev_nvme_attach_controller" 00:19:22.073 } 00:19:22.073 EOF 00:19:22.073 )") 00:19:22.073 13:18:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:19:22.073 13:18:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:19:22.073 13:18:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:19:22.073 13:18:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:22.073 "params": { 00:19:22.073 "name": "Nvme1", 00:19:22.073 "trtype": "tcp", 00:19:22.073 "traddr": "10.0.0.2", 00:19:22.073 "adrfam": "ipv4", 00:19:22.073 "trsvcid": "4420", 00:19:22.073 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:22.073 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:22.073 "hdgst": false, 00:19:22.073 "ddgst": false 00:19:22.073 }, 00:19:22.073 "method": "bdev_nvme_attach_controller" 00:19:22.073 }' 00:19:22.073 [2024-07-15 13:18:18.767579] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:19:22.073 [2024-07-15 13:18:18.767655] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid98699 ] 00:19:22.331 [2024-07-15 13:18:18.900301] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:22.331 [2024-07-15 13:18:19.054358] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:22.331 [2024-07-15 13:18:19.056243] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:22.331 [2024-07-15 13:18:19.056278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:22.588 I/O targets: 00:19:22.588 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:22.588 00:19:22.588 00:19:22.588 CUnit - A unit testing framework for C - Version 2.1-3 00:19:22.588 http://cunit.sourceforge.net/ 00:19:22.588 00:19:22.588 00:19:22.589 Suite: bdevio tests on: Nvme1n1 00:19:22.589 Test: blockdev write read block ...passed 00:19:22.846 Test: blockdev write zeroes read block ...passed 00:19:22.846 Test: blockdev write zeroes read no split ...passed 00:19:22.846 Test: blockdev write zeroes read split ...passed 00:19:22.846 Test: blockdev write zeroes read split partial ...passed 00:19:22.846 Test: blockdev reset ...[2024-07-15 13:18:19.368453] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:22.846 [2024-07-15 13:18:19.368754] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe47240 (9): Bad file descriptor 00:19:22.846 [2024-07-15 13:18:19.382043] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:19:22.846 passed 00:19:22.846 Test: blockdev write read 8 blocks ...passed 00:19:22.846 Test: blockdev write read size > 128k ...passed 00:19:22.846 Test: blockdev write read invalid size ...passed 00:19:22.846 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:22.846 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:22.846 Test: blockdev write read max offset ...passed 00:19:22.846 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:22.846 Test: blockdev writev readv 8 blocks ...passed 00:19:22.846 Test: blockdev writev readv 30 x 1block ...passed 00:19:22.846 Test: blockdev writev readv block ...passed 00:19:22.846 Test: blockdev writev readv size > 128k ...passed 00:19:22.846 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:22.846 Test: blockdev comparev and writev ...[2024-07-15 13:18:19.558596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:22.846 [2024-07-15 13:18:19.558670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:22.846 [2024-07-15 13:18:19.558696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:22.846 [2024-07-15 13:18:19.558711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:22.846 [2024-07-15 13:18:19.559032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:22.846 [2024-07-15 13:18:19.559059] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:22.846 [2024-07-15 13:18:19.559088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:22.846 [2024-07-15 13:18:19.559100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:22.846 [2024-07-15 13:18:19.559744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:22.846 [2024-07-15 13:18:19.559780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:22.846 [2024-07-15 13:18:19.559803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:22.846 [2024-07-15 13:18:19.559815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:22.846 [2024-07-15 13:18:19.560141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:22.846 [2024-07-15 13:18:19.560172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:22.846 [2024-07-15 13:18:19.560194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:22.846 [2024-07-15 13:18:19.560219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:23.104 passed 00:19:23.104 Test: blockdev nvme passthru rw ...passed 00:19:23.104 Test: blockdev nvme passthru vendor specific ...[2024-07-15 13:18:19.644680] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:23.104 [2024-07-15 13:18:19.644727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:23.104 [2024-07-15 13:18:19.644860] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:23.104 [2024-07-15 13:18:19.644880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:23.104 passed 00:19:23.104 Test: blockdev nvme admin passthru ...[2024-07-15 13:18:19.645002] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:23.104 [2024-07-15 13:18:19.645029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:23.104 [2024-07-15 13:18:19.645149] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:23.104 [2024-07-15 13:18:19.645168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:23.104 passed 00:19:23.104 Test: blockdev copy ...passed 00:19:23.104 00:19:23.104 Run Summary: Type Total Ran Passed Failed Inactive 00:19:23.104 suites 1 1 n/a 0 0 00:19:23.104 tests 23 23 23 0 0 00:19:23.104 asserts 152 152 152 0 
n/a 00:19:23.104 00:19:23.104 Elapsed time = 0.915 seconds 00:19:23.361 13:18:20 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:23.361 13:18:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.361 13:18:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:23.361 13:18:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.361 13:18:20 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:23.361 13:18:20 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:19:23.361 13:18:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:23.361 13:18:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:19:23.361 13:18:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:23.361 13:18:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:19:23.361 13:18:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:23.361 13:18:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:23.361 rmmod nvme_tcp 00:19:23.619 rmmod nvme_fabrics 00:19:23.619 rmmod nvme_keyring 00:19:23.619 13:18:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:23.619 13:18:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:19:23.619 13:18:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:19:23.619 13:18:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 98645 ']' 00:19:23.619 13:18:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 98645 00:19:23.619 13:18:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@946 -- # '[' -z 98645 ']' 00:19:23.619 13:18:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # kill -0 98645 00:19:23.619 13:18:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # uname 00:19:23.619 13:18:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:23.619 13:18:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 98645 00:19:23.619 killing process with pid 98645 00:19:23.619 13:18:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:19:23.619 13:18:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:19:23.619 13:18:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # echo 'killing process with pid 98645' 00:19:23.619 13:18:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@965 -- # kill 98645 00:19:23.619 13:18:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # wait 98645 00:19:23.877 13:18:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:23.877 13:18:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:23.877 13:18:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:23.877 13:18:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:23.877 13:18:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:23.877 13:18:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:19:23.877 13:18:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:23.877 13:18:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:23.877 13:18:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:23.877 ************************************ 00:19:23.877 END TEST nvmf_bdevio_no_huge 00:19:23.877 ************************************ 00:19:23.877 00:19:23.877 real 0m3.550s 00:19:23.877 user 0m12.658s 00:19:23.877 sys 0m1.364s 00:19:23.877 13:18:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:23.877 13:18:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:24.135 13:18:20 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:24.135 13:18:20 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:24.135 13:18:20 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:24.135 13:18:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:24.135 ************************************ 00:19:24.135 START TEST nvmf_tls 00:19:24.135 ************************************ 00:19:24.135 13:18:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:24.135 * Looking for test storage... 00:19:24.135 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:24.135 13:18:20 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:24.135 13:18:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:19:24.135 13:18:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:24.135 13:18:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:24.135 13:18:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:24.135 13:18:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:24.135 13:18:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:24.135 13:18:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:24.135 13:18:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:24.135 13:18:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:24.135 13:18:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:24.135 13:18:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:24.135 13:18:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:19:24.135 13:18:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=c8b8b44b-387e-43b9-a950-dc0d98528a02 00:19:24.135 13:18:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:24.135 13:18:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:24.135 13:18:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:24.135 13:18:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:24.135 13:18:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:24.135 13:18:20 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh 
]] 00:19:24.135 13:18:20 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:24.135 13:18:20 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:24.135 13:18:20 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:24.135 13:18:20 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:24.135 13:18:20 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:24.135 13:18:20 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:19:24.135 13:18:20 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:24.135 13:18:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:19:24.135 13:18:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:24.135 13:18:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:24.135 13:18:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:24.135 13:18:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:24.135 13:18:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:24.135 13:18:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:24.135 13:18:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:24.135 13:18:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:24.135 13:18:20 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # 
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:24.135 13:18:20 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:19:24.135 13:18:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:24.135 13:18:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:24.135 13:18:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:24.135 13:18:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:24.135 13:18:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:24.135 13:18:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:24.135 13:18:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:24.135 13:18:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:24.135 13:18:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:24.135 13:18:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:24.135 13:18:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:24.135 13:18:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:24.135 13:18:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:24.135 13:18:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:24.135 13:18:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:24.135 13:18:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:24.135 13:18:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:24.135 13:18:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:24.135 13:18:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:24.135 13:18:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:24.135 13:18:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:24.135 13:18:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:24.135 13:18:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:24.135 13:18:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:24.135 13:18:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:24.135 13:18:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:24.135 13:18:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:24.135 13:18:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:24.135 Cannot find device "nvmf_tgt_br" 00:19:24.135 13:18:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # true 00:19:24.135 13:18:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:24.135 Cannot find device "nvmf_tgt_br2" 00:19:24.135 13:18:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # true 00:19:24.135 13:18:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:24.135 13:18:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:24.135 Cannot find device "nvmf_tgt_br" 00:19:24.135 13:18:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # true 00:19:24.135 13:18:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 
down 00:19:24.135 Cannot find device "nvmf_tgt_br2" 00:19:24.135 13:18:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # true 00:19:24.135 13:18:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:24.135 13:18:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:24.397 13:18:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:24.397 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:24.397 13:18:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # true 00:19:24.397 13:18:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:24.397 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:24.397 13:18:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # true 00:19:24.397 13:18:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:24.397 13:18:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:24.397 13:18:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:24.397 13:18:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:24.397 13:18:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:24.397 13:18:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:24.397 13:18:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:24.397 13:18:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:24.397 13:18:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:24.397 13:18:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:24.397 13:18:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:24.397 13:18:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:24.397 13:18:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:24.397 13:18:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:24.397 13:18:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:24.397 13:18:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:24.397 13:18:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:24.397 13:18:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:24.397 13:18:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:24.397 13:18:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:24.397 13:18:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:24.397 13:18:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:24.397 13:18:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:24.397 13:18:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@205 -- # 
ping -c 1 10.0.0.2 00:19:24.397 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:24.397 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:19:24.397 00:19:24.397 --- 10.0.0.2 ping statistics --- 00:19:24.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:24.397 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:19:24.397 13:18:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:24.397 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:24.397 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:19:24.397 00:19:24.397 --- 10.0.0.3 ping statistics --- 00:19:24.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:24.398 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:19:24.398 13:18:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:24.398 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:24.398 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:19:24.398 00:19:24.398 --- 10.0.0.1 ping statistics --- 00:19:24.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:24.398 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:19:24.398 13:18:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:24.398 13:18:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@433 -- # return 0 00:19:24.398 13:18:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:24.398 13:18:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:24.398 13:18:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:24.398 13:18:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:24.398 13:18:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:24.398 13:18:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:24.398 13:18:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:24.398 13:18:21 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:19:24.398 13:18:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:24.398 13:18:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:24.398 13:18:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:24.398 13:18:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=98884 00:19:24.398 13:18:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:19:24.398 13:18:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 98884 00:19:24.398 13:18:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 98884 ']' 00:19:24.398 13:18:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:24.398 13:18:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:24.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:24.398 13:18:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
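For orientation, the nvmf_veth_init sequence traced above (both in this nvmf_tls run and in the earlier nvmf_bdevio_no_huge run) boils down to the shell sketch below: one veth pair whose initiator end (nvmf_init_if, 10.0.0.1/24) stays in the default namespace, two veth pairs whose target ends (nvmf_tgt_if 10.0.0.2/24 and nvmf_tgt_if2 10.0.0.3/24) are moved into the nvmf_tgt_ns_spdk namespace, and a bridge nvmf_br joining the near ends, plus iptables rules admitting NVMe/TCP traffic on port 4420. This is a condensed reconstruction of the commands visible in the trace, not a verbatim copy of nvmf/common.sh; interface, namespace, and address names are the ones the log already uses.

    # create the target namespace and the three veth pairs
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    # move the target-side ends into the namespace
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    # address the initiator end and the two in-namespace ends
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    # bring everything up, inside and outside the namespace
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # bridge the host-side peers together
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    # allow NVMe/TCP (port 4420) in and let the bridge forward to itself
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

With the topology in place, the test launches the target inside the namespace (as the trace shows next), e.g. ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc, and the initiator-side tools reach it at 10.0.0.2:4420 through the bridge.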
00:19:24.398 13:18:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:24.398 13:18:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:24.655 [2024-07-15 13:18:21.145395] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:19:24.655 [2024-07-15 13:18:21.145627] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:24.655 [2024-07-15 13:18:21.282507] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:24.655 [2024-07-15 13:18:21.377387] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:24.655 [2024-07-15 13:18:21.377453] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:24.655 [2024-07-15 13:18:21.377468] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:24.655 [2024-07-15 13:18:21.377479] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:24.655 [2024-07-15 13:18:21.377489] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:24.655 [2024-07-15 13:18:21.377520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:24.912 13:18:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:24.912 13:18:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:19:24.912 13:18:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:24.912 13:18:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:24.912 13:18:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:24.912 13:18:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:24.912 13:18:21 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:19:24.912 13:18:21 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:19:25.169 true 00:19:25.169 13:18:21 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:19:25.169 13:18:21 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:25.426 13:18:21 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:19:25.426 13:18:21 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:19:25.426 13:18:21 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:25.683 13:18:22 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:25.683 13:18:22 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:19:25.940 13:18:22 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:19:25.940 13:18:22 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:19:25.940 13:18:22 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:19:26.197 13:18:22 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:26.197 13:18:22 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- 
# jq -r .tls_version 00:19:26.455 13:18:22 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:19:26.455 13:18:22 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:19:26.455 13:18:23 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:26.455 13:18:23 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:19:26.713 13:18:23 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:19:26.713 13:18:23 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:19:26.713 13:18:23 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:19:26.971 13:18:23 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:19:26.971 13:18:23 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:27.229 13:18:23 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:19:27.229 13:18:23 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:19:27.229 13:18:23 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:19:27.489 13:18:24 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:27.489 13:18:24 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:19:27.747 13:18:24 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:19:27.747 13:18:24 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:19:27.747 13:18:24 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:19:27.747 13:18:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:19:27.747 13:18:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:19:27.747 13:18:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:19:27.747 13:18:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:19:27.747 13:18:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:19:27.747 13:18:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:19:27.747 13:18:24 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:27.747 13:18:24 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:19:27.747 13:18:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:19:27.747 13:18:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:19:27.747 13:18:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:19:27.747 13:18:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:19:27.747 13:18:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:19:27.747 13:18:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:19:28.073 13:18:24 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:28.073 13:18:24 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:19:28.073 13:18:24 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.9camjkBE6O 00:19:28.073 13:18:24 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:19:28.073 
13:18:24 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.l9HF9KAl3d 00:19:28.073 13:18:24 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:28.073 13:18:24 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:28.073 13:18:24 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.9camjkBE6O 00:19:28.073 13:18:24 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.l9HF9KAl3d 00:19:28.073 13:18:24 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:28.331 13:18:24 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:19:28.590 13:18:25 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.9camjkBE6O 00:19:28.590 13:18:25 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.9camjkBE6O 00:19:28.590 13:18:25 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:28.848 [2024-07-15 13:18:25.429768] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:28.848 13:18:25 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:29.105 13:18:25 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:29.362 [2024-07-15 13:18:26.041909] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:29.362 [2024-07-15 13:18:26.042123] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:29.362 13:18:26 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:29.927 malloc0 00:19:29.927 13:18:26 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:29.927 13:18:26 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9camjkBE6O 00:19:30.185 [2024-07-15 13:18:26.845153] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:30.185 13:18:26 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.9camjkBE6O 00:19:42.377 Initializing NVMe Controllers 00:19:42.377 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:42.377 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:42.377 Initialization complete. Launching workers. 
00:19:42.377 ======================================================== 00:19:42.377 Latency(us) 00:19:42.377 Device Information : IOPS MiB/s Average min max 00:19:42.377 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9100.28 35.55 7035.14 1480.32 14246.42 00:19:42.377 ======================================================== 00:19:42.377 Total : 9100.28 35.55 7035.14 1480.32 14246.42 00:19:42.377 00:19:42.377 13:18:37 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.9camjkBE6O 00:19:42.377 13:18:37 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:42.377 13:18:37 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:42.377 13:18:37 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:42.377 13:18:37 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.9camjkBE6O' 00:19:42.377 13:18:37 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:42.377 13:18:37 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=99222 00:19:42.377 13:18:37 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:42.377 13:18:37 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:42.377 13:18:37 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 99222 /var/tmp/bdevperf.sock 00:19:42.377 13:18:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 99222 ']' 00:19:42.377 13:18:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:42.377 13:18:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:42.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:42.377 13:18:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:42.377 13:18:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:42.377 13:18:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:42.377 [2024-07-15 13:18:37.101285] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:19:42.377 [2024-07-15 13:18:37.101383] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99222 ] 00:19:42.377 [2024-07-15 13:18:37.238309] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:42.377 [2024-07-15 13:18:37.338339] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:42.377 13:18:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:42.377 13:18:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:19:42.377 13:18:38 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9camjkBE6O 00:19:42.377 [2024-07-15 13:18:38.419563] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:42.377 [2024-07-15 13:18:38.419702] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:42.377 TLSTESTn1 00:19:42.377 13:18:38 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:42.377 Running I/O for 10 seconds... 00:19:52.353 00:19:52.353 Latency(us) 00:19:52.353 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:52.353 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:52.353 Verification LBA range: start 0x0 length 0x2000 00:19:52.353 TLSTESTn1 : 10.02 3856.45 15.06 0.00 0.00 33118.43 7298.33 30146.56 00:19:52.353 =================================================================================================================== 00:19:52.353 Total : 3856.45 15.06 0.00 0.00 33118.43 7298.33 30146.56 00:19:52.353 0 00:19:52.353 13:18:48 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:52.353 13:18:48 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 99222 00:19:52.353 13:18:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 99222 ']' 00:19:52.353 13:18:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 99222 00:19:52.353 13:18:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:19:52.353 13:18:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:52.353 13:18:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 99222 00:19:52.353 killing process with pid 99222 00:19:52.353 13:18:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:19:52.353 13:18:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:19:52.353 13:18:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 99222' 00:19:52.353 13:18:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 99222 00:19:52.353 Received shutdown signal, test time was about 10.000000 seconds 00:19:52.353 00:19:52.353 Latency(us) 00:19:52.353 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:52.353 
=================================================================================================================== 00:19:52.353 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:52.354 [2024-07-15 13:18:48.704541] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:52.354 13:18:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 99222 00:19:52.354 13:18:48 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.l9HF9KAl3d 00:19:52.354 13:18:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:52.354 13:18:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.l9HF9KAl3d 00:19:52.354 13:18:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:52.354 13:18:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:52.354 13:18:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:52.354 13:18:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:52.354 13:18:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.l9HF9KAl3d 00:19:52.354 13:18:48 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:52.354 13:18:48 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:52.354 13:18:48 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:52.354 13:18:48 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.l9HF9KAl3d' 00:19:52.354 13:18:48 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:52.354 13:18:48 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=99374 00:19:52.354 13:18:48 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:52.354 13:18:48 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 99374 /var/tmp/bdevperf.sock 00:19:52.354 13:18:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 99374 ']' 00:19:52.354 13:18:48 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:52.354 13:18:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:52.354 13:18:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:52.354 13:18:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:52.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:52.354 13:18:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:52.354 13:18:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:52.354 [2024-07-15 13:18:48.967058] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:19:52.354 [2024-07-15 13:18:48.967152] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99374 ] 00:19:52.612 [2024-07-15 13:18:49.099992] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:52.612 [2024-07-15 13:18:49.194577] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:53.546 13:18:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:53.546 13:18:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:19:53.546 13:18:50 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.l9HF9KAl3d 00:19:53.802 [2024-07-15 13:18:50.351134] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:53.803 [2024-07-15 13:18:50.351298] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:53.803 [2024-07-15 13:18:50.358819] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:53.803 [2024-07-15 13:18:50.359237] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf96440 (107): Transport endpoint is not connected 00:19:53.803 [2024-07-15 13:18:50.360221] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf96440 (9): Bad file descriptor 00:19:53.803 [2024-07-15 13:18:50.361215] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:53.803 [2024-07-15 13:18:50.361240] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:53.803 [2024-07-15 13:18:50.361257] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:19:53.803 2024/07/15 13:18:50 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/tmp/tmp.l9HF9KAl3d subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:19:53.803 request: 00:19:53.803 { 00:19:53.803 "method": "bdev_nvme_attach_controller", 00:19:53.803 "params": { 00:19:53.803 "name": "TLSTEST", 00:19:53.803 "trtype": "tcp", 00:19:53.803 "traddr": "10.0.0.2", 00:19:53.803 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:53.803 "adrfam": "ipv4", 00:19:53.803 "trsvcid": "4420", 00:19:53.803 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:53.803 "psk": "/tmp/tmp.l9HF9KAl3d" 00:19:53.803 } 00:19:53.803 } 00:19:53.803 Got JSON-RPC error response 00:19:53.803 GoRPCClient: error on JSON-RPC call 00:19:53.803 13:18:50 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 99374 00:19:53.803 13:18:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 99374 ']' 00:19:53.803 13:18:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 99374 00:19:53.803 13:18:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:19:53.803 13:18:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:53.803 13:18:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 99374 00:19:53.803 13:18:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:19:53.803 13:18:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:19:53.803 killing process with pid 99374 00:19:53.803 13:18:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 99374' 00:19:53.803 13:18:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 99374 00:19:53.803 Received shutdown signal, test time was about 10.000000 seconds 00:19:53.803 00:19:53.803 Latency(us) 00:19:53.803 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:53.803 =================================================================================================================== 00:19:53.803 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:53.803 [2024-07-15 13:18:50.406176] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:53.803 13:18:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 99374 00:19:54.062 13:18:50 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:54.062 13:18:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:54.062 13:18:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:54.062 13:18:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:54.062 13:18:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:54.062 13:18:50 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.9camjkBE6O 00:19:54.062 13:18:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:54.062 13:18:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.9camjkBE6O 00:19:54.062 13:18:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local 
arg=run_bdevperf 00:19:54.062 13:18:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:54.062 13:18:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:54.062 13:18:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:54.062 13:18:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.9camjkBE6O 00:19:54.062 13:18:50 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:54.062 13:18:50 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:54.062 13:18:50 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:19:54.062 13:18:50 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.9camjkBE6O' 00:19:54.062 13:18:50 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:54.062 13:18:50 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=99420 00:19:54.062 13:18:50 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:54.062 13:18:50 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:54.062 13:18:50 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 99420 /var/tmp/bdevperf.sock 00:19:54.062 13:18:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 99420 ']' 00:19:54.062 13:18:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:54.062 13:18:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:54.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:54.062 13:18:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:54.062 13:18:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:54.062 13:18:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:54.062 [2024-07-15 13:18:50.675301] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:19:54.062 [2024-07-15 13:18:50.675410] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99420 ] 00:19:54.321 [2024-07-15 13:18:50.819615] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:54.321 [2024-07-15 13:18:50.907858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:55.255 13:18:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:55.255 13:18:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:19:55.255 13:18:51 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.9camjkBE6O 00:19:55.255 [2024-07-15 13:18:51.977869] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:55.255 [2024-07-15 13:18:51.977982] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:55.255 [2024-07-15 13:18:51.988694] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:55.255 [2024-07-15 13:18:51.988731] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:55.255 [2024-07-15 13:18:51.988785] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:55.255 [2024-07-15 13:18:51.989535] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0a440 (107): Transport endpoint is not connected 00:19:55.255 [2024-07-15 13:18:51.990525] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0a440 (9): Bad file descriptor 00:19:55.255 [2024-07-15 13:18:51.991522] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:55.255 [2024-07-15 13:18:51.991546] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:55.255 [2024-07-15 13:18:51.991564] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:19:55.513 2024/07/15 13:18:51 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST psk:/tmp/tmp.9camjkBE6O subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:19:55.513 request: 00:19:55.513 { 00:19:55.513 "method": "bdev_nvme_attach_controller", 00:19:55.513 "params": { 00:19:55.513 "name": "TLSTEST", 00:19:55.513 "trtype": "tcp", 00:19:55.513 "traddr": "10.0.0.2", 00:19:55.513 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:55.513 "adrfam": "ipv4", 00:19:55.513 "trsvcid": "4420", 00:19:55.513 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:55.513 "psk": "/tmp/tmp.9camjkBE6O" 00:19:55.513 } 00:19:55.513 } 00:19:55.513 Got JSON-RPC error response 00:19:55.513 GoRPCClient: error on JSON-RPC call 00:19:55.513 13:18:52 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 99420 00:19:55.513 13:18:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 99420 ']' 00:19:55.513 13:18:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 99420 00:19:55.513 13:18:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:19:55.513 13:18:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:55.513 13:18:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 99420 00:19:55.513 13:18:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:19:55.513 killing process with pid 99420 00:19:55.513 13:18:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:19:55.513 13:18:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 99420' 00:19:55.513 Received shutdown signal, test time was about 10.000000 seconds 00:19:55.513 00:19:55.513 Latency(us) 00:19:55.513 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:55.513 =================================================================================================================== 00:19:55.513 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:55.513 13:18:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 99420 00:19:55.513 [2024-07-15 13:18:52.039029] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:55.513 13:18:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 99420 00:19:55.513 13:18:52 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:55.513 13:18:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:55.513 13:18:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:55.513 13:18:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:55.513 13:18:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:55.513 13:18:52 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.9camjkBE6O 00:19:55.513 13:18:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:55.513 13:18:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.9camjkBE6O 00:19:55.513 13:18:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local 
arg=run_bdevperf 00:19:55.513 13:18:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:55.513 13:18:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:55.513 13:18:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:55.513 13:18:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.9camjkBE6O 00:19:55.513 13:18:52 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:55.513 13:18:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:19:55.513 13:18:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:55.513 13:18:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.9camjkBE6O' 00:19:55.513 13:18:52 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:55.513 13:18:52 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:55.513 13:18:52 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=99465 00:19:55.513 13:18:52 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:55.513 13:18:52 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 99465 /var/tmp/bdevperf.sock 00:19:55.513 13:18:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 99465 ']' 00:19:55.513 13:18:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:55.513 13:18:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:55.513 13:18:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:55.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:55.513 13:18:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:55.513 13:18:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:55.771 [2024-07-15 13:18:52.280603] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:19:55.771 [2024-07-15 13:18:52.280679] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99465 ] 00:19:55.771 [2024-07-15 13:18:52.411863] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:55.771 [2024-07-15 13:18:52.508562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:56.724 13:18:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:56.724 13:18:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:19:56.724 13:18:53 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9camjkBE6O 00:19:56.981 [2024-07-15 13:18:53.615568] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:56.981 [2024-07-15 13:18:53.615745] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:56.982 [2024-07-15 13:18:53.621850] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:56.982 [2024-07-15 13:18:53.621905] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:56.982 [2024-07-15 13:18:53.621972] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:56.982 [2024-07-15 13:18:53.622451] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23e5440 (107): Transport endpoint is not connected 00:19:56.982 [2024-07-15 13:18:53.623437] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23e5440 (9): Bad file descriptor 00:19:56.982 [2024-07-15 13:18:53.624432] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:19:56.982 [2024-07-15 13:18:53.624465] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:56.982 [2024-07-15 13:18:53.624486] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:19:56.982 2024/07/15 13:18:53 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/tmp/tmp.9camjkBE6O subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:19:56.982 request: 00:19:56.982 { 00:19:56.982 "method": "bdev_nvme_attach_controller", 00:19:56.982 "params": { 00:19:56.982 "name": "TLSTEST", 00:19:56.982 "trtype": "tcp", 00:19:56.982 "traddr": "10.0.0.2", 00:19:56.982 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:56.982 "adrfam": "ipv4", 00:19:56.982 "trsvcid": "4420", 00:19:56.982 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:56.982 "psk": "/tmp/tmp.9camjkBE6O" 00:19:56.982 } 00:19:56.982 } 00:19:56.982 Got JSON-RPC error response 00:19:56.982 GoRPCClient: error on JSON-RPC call 00:19:56.982 13:18:53 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 99465 00:19:56.982 13:18:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 99465 ']' 00:19:56.982 13:18:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 99465 00:19:56.982 13:18:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:19:56.982 13:18:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:56.982 13:18:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 99465 00:19:56.982 13:18:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:19:56.982 13:18:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:19:56.982 killing process with pid 99465 00:19:56.982 13:18:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 99465' 00:19:56.982 Received shutdown signal, test time was about 10.000000 seconds 00:19:56.982 00:19:56.982 Latency(us) 00:19:56.982 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:56.982 =================================================================================================================== 00:19:56.982 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:56.982 13:18:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 99465 00:19:56.982 [2024-07-15 13:18:53.684492] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:56.982 13:18:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 99465 00:19:57.239 13:18:53 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:57.239 13:18:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:57.239 13:18:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:57.239 13:18:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:57.239 13:18:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:57.239 13:18:53 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:57.239 13:18:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:57.239 13:18:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:57.239 13:18:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:57.239 13:18:53 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:57.239 13:18:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:57.239 13:18:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:57.239 13:18:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:57.239 13:18:53 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:57.240 13:18:53 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:57.240 13:18:53 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:57.240 13:18:53 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:19:57.240 13:18:53 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:57.240 13:18:53 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=99510 00:19:57.240 13:18:53 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:57.240 13:18:53 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:57.240 13:18:53 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 99510 /var/tmp/bdevperf.sock 00:19:57.240 13:18:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 99510 ']' 00:19:57.240 13:18:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:57.240 13:18:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:57.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:57.240 13:18:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:57.240 13:18:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:57.240 13:18:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:57.240 [2024-07-15 13:18:53.969068] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:19:57.240 [2024-07-15 13:18:53.969283] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99510 ] 00:19:57.497 [2024-07-15 13:18:54.113862] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:57.753 [2024-07-15 13:18:54.258480] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:58.684 13:18:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:58.684 13:18:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:19:58.684 13:18:55 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:58.685 [2024-07-15 13:18:55.389574] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:58.685 [2024-07-15 13:18:55.391540] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f8aa0 (9): Bad file descriptor 00:19:58.685 [2024-07-15 13:18:55.392524] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:58.685 [2024-07-15 13:18:55.392556] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:58.685 [2024-07-15 13:18:55.392576] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:58.685 2024/07/15 13:18:55 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:19:58.685 request: 00:19:58.685 { 00:19:58.685 "method": "bdev_nvme_attach_controller", 00:19:58.685 "params": { 00:19:58.685 "name": "TLSTEST", 00:19:58.685 "trtype": "tcp", 00:19:58.685 "traddr": "10.0.0.2", 00:19:58.685 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:58.685 "adrfam": "ipv4", 00:19:58.685 "trsvcid": "4420", 00:19:58.685 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:19:58.685 } 00:19:58.685 } 00:19:58.685 Got JSON-RPC error response 00:19:58.685 GoRPCClient: error on JSON-RPC call 00:19:58.685 13:18:55 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 99510 00:19:58.685 13:18:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 99510 ']' 00:19:58.685 13:18:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 99510 00:19:58.685 13:18:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:19:58.685 13:18:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:58.685 13:18:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 99510 00:19:58.942 13:18:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:19:58.942 killing process with pid 99510 00:19:58.942 13:18:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:19:58.942 13:18:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 99510' 00:19:58.942 13:18:55 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@965 -- # kill 99510 00:19:58.942 Received shutdown signal, test time was about 10.000000 seconds 00:19:58.942 00:19:58.942 Latency(us) 00:19:58.942 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:58.942 =================================================================================================================== 00:19:58.942 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:58.942 13:18:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 99510 00:19:59.198 13:18:55 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:59.198 13:18:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:59.198 13:18:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:59.198 13:18:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:59.198 13:18:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:59.198 13:18:55 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 98884 00:19:59.198 13:18:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 98884 ']' 00:19:59.198 13:18:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 98884 00:19:59.198 13:18:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:19:59.198 13:18:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:59.198 13:18:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 98884 00:19:59.198 13:18:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:19:59.198 killing process with pid 98884 00:19:59.198 13:18:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:19:59.198 13:18:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 98884' 00:19:59.198 13:18:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 98884 00:19:59.198 [2024-07-15 13:18:55.789328] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:59.198 13:18:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 98884 00:19:59.456 13:18:56 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:19:59.456 13:18:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:19:59.456 13:18:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:19:59.456 13:18:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:19:59.456 13:18:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:19:59.456 13:18:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:19:59.456 13:18:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:19:59.456 13:18:56 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:59.456 13:18:56 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:19:59.456 13:18:56 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.lTia0AerDv 00:19:59.456 13:18:56 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:59.456 13:18:56 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 
0600 /tmp/tmp.lTia0AerDv 00:19:59.456 13:18:56 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:19:59.456 13:18:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:59.456 13:18:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:59.456 13:18:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:59.456 13:18:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=99571 00:19:59.456 13:18:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:59.456 13:18:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 99571 00:19:59.456 13:18:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 99571 ']' 00:19:59.456 13:18:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:59.456 13:18:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:59.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:59.456 13:18:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:59.456 13:18:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:59.456 13:18:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:59.456 [2024-07-15 13:18:56.157020] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:19:59.456 [2024-07-15 13:18:56.157153] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:59.714 [2024-07-15 13:18:56.295785] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:59.714 [2024-07-15 13:18:56.394470] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:59.714 [2024-07-15 13:18:56.394528] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:59.714 [2024-07-15 13:18:56.394540] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:59.714 [2024-07-15 13:18:56.394548] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:59.714 [2024-07-15 13:18:56.394556] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:59.714 [2024-07-15 13:18:56.394582] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:00.647 13:18:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:00.647 13:18:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:00.647 13:18:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:00.647 13:18:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:00.647 13:18:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:00.647 13:18:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:00.647 13:18:57 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.lTia0AerDv 00:20:00.647 13:18:57 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.lTia0AerDv 00:20:00.647 13:18:57 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:00.903 [2024-07-15 13:18:57.478092] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:00.903 13:18:57 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:01.160 13:18:57 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:01.419 [2024-07-15 13:18:58.082249] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:01.419 [2024-07-15 13:18:58.082510] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:01.419 13:18:58 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:01.678 malloc0 00:20:01.935 13:18:58 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:02.192 13:18:58 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lTia0AerDv 00:20:02.450 [2024-07-15 13:18:58.993555] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:02.450 13:18:59 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lTia0AerDv 00:20:02.450 13:18:59 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:02.450 13:18:59 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:02.450 13:18:59 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:02.450 13:18:59 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.lTia0AerDv' 00:20:02.450 13:18:59 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:02.450 13:18:59 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=99673 00:20:02.450 13:18:59 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:02.450 13:18:59 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:02.450 13:18:59 
nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 99673 /var/tmp/bdevperf.sock 00:20:02.450 13:18:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 99673 ']' 00:20:02.450 13:18:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:02.450 13:18:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:02.450 13:18:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:02.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:02.450 13:18:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:02.450 13:18:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:02.450 [2024-07-15 13:18:59.079146] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:20:02.450 [2024-07-15 13:18:59.079586] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99673 ] 00:20:02.707 [2024-07-15 13:18:59.250980] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:02.707 [2024-07-15 13:18:59.357062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:03.641 13:19:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:03.641 13:19:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:03.641 13:19:00 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lTia0AerDv 00:20:03.641 [2024-07-15 13:19:00.376608] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:03.641 [2024-07-15 13:19:00.376759] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:03.900 TLSTESTn1 00:20:03.900 13:19:00 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:03.900 Running I/O for 10 seconds... 
00:20:13.877 00:20:13.877 Latency(us) 00:20:13.877 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:13.877 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:13.877 Verification LBA range: start 0x0 length 0x2000 00:20:13.877 TLSTESTn1 : 10.02 3657.53 14.29 0.00 0.00 34928.46 7238.75 32648.84 00:20:13.877 =================================================================================================================== 00:20:13.877 Total : 3657.53 14.29 0.00 0.00 34928.46 7238.75 32648.84 00:20:13.877 0 00:20:14.136 13:19:10 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:14.136 13:19:10 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 99673 00:20:14.136 13:19:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 99673 ']' 00:20:14.136 13:19:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 99673 00:20:14.136 13:19:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:14.136 13:19:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:14.136 13:19:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 99673 00:20:14.136 13:19:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:20:14.136 13:19:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:20:14.136 killing process with pid 99673 00:20:14.136 13:19:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 99673' 00:20:14.136 Received shutdown signal, test time was about 10.000000 seconds 00:20:14.136 00:20:14.136 Latency(us) 00:20:14.136 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:14.136 =================================================================================================================== 00:20:14.136 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:14.136 13:19:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 99673 00:20:14.136 [2024-07-15 13:19:10.652902] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:14.136 13:19:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 99673 00:20:14.136 13:19:10 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.lTia0AerDv 00:20:14.136 13:19:10 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lTia0AerDv 00:20:14.136 13:19:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:14.136 13:19:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lTia0AerDv 00:20:14.136 13:19:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:14.136 13:19:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:14.136 13:19:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:14.393 13:19:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:14.393 13:19:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lTia0AerDv 00:20:14.393 13:19:10 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:14.393 
13:19:10 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:14.393 13:19:10 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:14.393 13:19:10 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.lTia0AerDv' 00:20:14.393 13:19:10 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:14.393 13:19:10 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=99825 00:20:14.393 13:19:10 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:14.393 13:19:10 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:14.393 13:19:10 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 99825 /var/tmp/bdevperf.sock 00:20:14.393 13:19:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 99825 ']' 00:20:14.393 13:19:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:14.393 13:19:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:14.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:14.393 13:19:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:14.393 13:19:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:14.393 13:19:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:14.393 [2024-07-15 13:19:10.932966] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:20:14.393 [2024-07-15 13:19:10.933072] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99825 ] 00:20:14.393 [2024-07-15 13:19:11.072914] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:14.651 [2024-07-15 13:19:11.167667] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:15.585 13:19:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:15.585 13:19:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:15.585 13:19:11 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lTia0AerDv 00:20:15.585 [2024-07-15 13:19:12.230498] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:15.585 [2024-07-15 13:19:12.230588] bdev_nvme.c:6122:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:15.585 [2024-07-15 13:19:12.230601] bdev_nvme.c:6231:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.lTia0AerDv 00:20:15.585 2024/07/15 13:19:12 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/tmp/tmp.lTia0AerDv subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-1 Msg=Operation not permitted 00:20:15.585 request: 00:20:15.585 { 00:20:15.585 
"method": "bdev_nvme_attach_controller", 00:20:15.585 "params": { 00:20:15.585 "name": "TLSTEST", 00:20:15.585 "trtype": "tcp", 00:20:15.585 "traddr": "10.0.0.2", 00:20:15.585 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:15.585 "adrfam": "ipv4", 00:20:15.585 "trsvcid": "4420", 00:20:15.585 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:15.585 "psk": "/tmp/tmp.lTia0AerDv" 00:20:15.585 } 00:20:15.585 } 00:20:15.585 Got JSON-RPC error response 00:20:15.585 GoRPCClient: error on JSON-RPC call 00:20:15.585 13:19:12 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 99825 00:20:15.585 13:19:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 99825 ']' 00:20:15.585 13:19:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 99825 00:20:15.585 13:19:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:15.585 13:19:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:15.585 13:19:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 99825 00:20:15.585 13:19:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:20:15.585 13:19:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:20:15.585 killing process with pid 99825 00:20:15.585 13:19:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 99825' 00:20:15.585 13:19:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 99825 00:20:15.585 Received shutdown signal, test time was about 10.000000 seconds 00:20:15.585 00:20:15.585 Latency(us) 00:20:15.585 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:15.585 =================================================================================================================== 00:20:15.585 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:15.585 13:19:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 99825 00:20:15.843 13:19:12 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:15.843 13:19:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:15.843 13:19:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:15.843 13:19:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:15.843 13:19:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:15.843 13:19:12 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 99571 00:20:15.843 13:19:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 99571 ']' 00:20:15.843 13:19:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 99571 00:20:15.843 13:19:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:15.843 13:19:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:15.843 13:19:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 99571 00:20:15.843 13:19:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:15.843 13:19:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:15.843 killing process with pid 99571 00:20:15.843 13:19:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 99571' 00:20:15.843 13:19:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 99571 00:20:15.843 [2024-07-15 13:19:12.525356] app.c:1024:log_deprecation_hits: *WARNING*: 
nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:15.843 13:19:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 99571 00:20:16.101 13:19:12 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:20:16.101 13:19:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:16.101 13:19:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:16.101 13:19:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:16.101 13:19:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:16.101 13:19:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=99871 00:20:16.101 13:19:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 99871 00:20:16.101 13:19:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 99871 ']' 00:20:16.101 13:19:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:16.101 13:19:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:16.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:16.101 13:19:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:16.101 13:19:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:16.101 13:19:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:16.101 [2024-07-15 13:19:12.809600] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:20:16.101 [2024-07-15 13:19:12.809711] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:16.358 [2024-07-15 13:19:12.944796] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:16.358 [2024-07-15 13:19:13.037576] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:16.358 [2024-07-15 13:19:13.037636] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:16.358 [2024-07-15 13:19:13.037648] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:16.358 [2024-07-15 13:19:13.037657] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:16.358 [2024-07-15 13:19:13.037664] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:16.358 [2024-07-15 13:19:13.037690] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:17.291 13:19:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:17.291 13:19:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:17.291 13:19:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:17.292 13:19:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:17.292 13:19:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:17.292 13:19:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:17.292 13:19:13 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.lTia0AerDv 00:20:17.292 13:19:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:17.292 13:19:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.lTia0AerDv 00:20:17.292 13:19:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:20:17.292 13:19:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:17.292 13:19:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:20:17.292 13:19:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:17.292 13:19:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.lTia0AerDv 00:20:17.292 13:19:13 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.lTia0AerDv 00:20:17.292 13:19:13 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:17.549 [2024-07-15 13:19:14.132593] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:17.549 13:19:14 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:17.806 13:19:14 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:18.063 [2024-07-15 13:19:14.660717] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:18.063 [2024-07-15 13:19:14.661012] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:18.063 13:19:14 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:18.320 malloc0 00:20:18.320 13:19:15 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:18.886 13:19:15 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lTia0AerDv 00:20:18.886 [2024-07-15 13:19:15.624047] tcp.c:3575:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:18.886 [2024-07-15 13:19:15.624102] tcp.c:3661:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:20:18.886 [2024-07-15 13:19:15.624146] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:20:19.143 2024/07/15 13:19:15 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: 
map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:/tmp/tmp.lTia0AerDv], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 00:20:19.143 request: 00:20:19.143 { 00:20:19.143 "method": "nvmf_subsystem_add_host", 00:20:19.143 "params": { 00:20:19.143 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:19.143 "host": "nqn.2016-06.io.spdk:host1", 00:20:19.143 "psk": "/tmp/tmp.lTia0AerDv" 00:20:19.143 } 00:20:19.143 } 00:20:19.143 Got JSON-RPC error response 00:20:19.143 GoRPCClient: error on JSON-RPC call 00:20:19.143 13:19:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:19.143 13:19:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:19.143 13:19:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:19.143 13:19:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:19.143 13:19:15 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 99871 00:20:19.143 13:19:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 99871 ']' 00:20:19.143 13:19:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 99871 00:20:19.143 13:19:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:19.143 13:19:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:19.143 13:19:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 99871 00:20:19.143 13:19:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:19.143 13:19:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:19.143 killing process with pid 99871 00:20:19.143 13:19:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 99871' 00:20:19.143 13:19:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 99871 00:20:19.143 13:19:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 99871 00:20:19.401 13:19:15 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.lTia0AerDv 00:20:19.401 13:19:15 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:20:19.401 13:19:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:19.401 13:19:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:19.401 13:19:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:19.401 13:19:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=99993 00:20:19.401 13:19:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:19.401 13:19:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 99993 00:20:19.401 13:19:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 99993 ']' 00:20:19.401 13:19:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:19.401 13:19:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:19.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:19.401 13:19:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
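The Code=-32603 error above is the expected outcome of the @177 "NOT setup_nvmf_tgt" step: tcp_load_psk rejects /tmp/tmp.lTia0AerDv because its permissions are too open, so nvmf_subsystem_add_host fails before the host is added. The script then chmod 0600's the key (@181) and repeats the setup against a fresh target. A condensed sketch of the accepted sequence, using the same RPCs and key path this run exercises (rpc.py abbreviates the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py path; the --psk file form is the deprecated interface the log keeps warning about):

    chmod 0600 /tmp/tmp.lTia0AerDv    # PSK file must not be group/world accessible
    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k    # -k enables TLS on the listener
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lTia0AerDv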
00:20:19.401 13:19:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:19.401 13:19:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:19.401 [2024-07-15 13:19:15.974593] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:20:19.401 [2024-07-15 13:19:15.974723] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:19.401 [2024-07-15 13:19:16.113046] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:19.658 [2024-07-15 13:19:16.219603] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:19.658 [2024-07-15 13:19:16.219670] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:19.658 [2024-07-15 13:19:16.219684] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:19.658 [2024-07-15 13:19:16.219694] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:19.658 [2024-07-15 13:19:16.219703] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:19.658 [2024-07-15 13:19:16.219732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:20.618 13:19:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:20.618 13:19:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:20.618 13:19:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:20.618 13:19:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:20.618 13:19:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:20.619 13:19:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:20.619 13:19:17 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.lTia0AerDv 00:20:20.619 13:19:17 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.lTia0AerDv 00:20:20.619 13:19:17 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:20.619 [2024-07-15 13:19:17.333266] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:20.619 13:19:17 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:20.876 13:19:17 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:21.133 [2024-07-15 13:19:17.869415] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:21.133 [2024-07-15 13:19:17.869639] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:21.390 13:19:17 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:21.647 malloc0 00:20:21.647 13:19:18 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:21.905 13:19:18 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lTia0AerDv 00:20:22.164 [2024-07-15 13:19:18.768656] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:22.164 13:19:18 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=100090 00:20:22.164 13:19:18 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:22.164 13:19:18 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:22.164 13:19:18 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 100090 /var/tmp/bdevperf.sock 00:20:22.164 13:19:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 100090 ']' 00:20:22.164 13:19:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:22.164 13:19:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:22.164 13:19:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:22.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:22.164 13:19:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:22.164 13:19:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:22.164 [2024-07-15 13:19:18.833606] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:20:22.164 [2024-07-15 13:19:18.833699] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100090 ] 00:20:22.422 [2024-07-15 13:19:18.968329] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:22.422 [2024-07-15 13:19:19.065037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:23.356 13:19:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:23.356 13:19:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:23.356 13:19:19 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lTia0AerDv 00:20:23.356 [2024-07-15 13:19:20.091734] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:23.356 [2024-07-15 13:19:20.091859] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:23.613 TLSTESTn1 00:20:23.613 13:19:20 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:20:23.872 13:19:20 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:20:23.872 "subsystems": [ 00:20:23.872 { 00:20:23.872 "subsystem": "keyring", 00:20:23.872 "config": [] 00:20:23.872 }, 00:20:23.872 { 00:20:23.872 "subsystem": "iobuf", 00:20:23.872 "config": [ 00:20:23.872 { 00:20:23.872 "method": "iobuf_set_options", 00:20:23.872 "params": { 00:20:23.872 "large_bufsize": 135168, 00:20:23.872 
"large_pool_count": 1024, 00:20:23.872 "small_bufsize": 8192, 00:20:23.872 "small_pool_count": 8192 00:20:23.872 } 00:20:23.872 } 00:20:23.872 ] 00:20:23.872 }, 00:20:23.872 { 00:20:23.872 "subsystem": "sock", 00:20:23.872 "config": [ 00:20:23.872 { 00:20:23.872 "method": "sock_set_default_impl", 00:20:23.872 "params": { 00:20:23.872 "impl_name": "posix" 00:20:23.872 } 00:20:23.872 }, 00:20:23.872 { 00:20:23.872 "method": "sock_impl_set_options", 00:20:23.872 "params": { 00:20:23.872 "enable_ktls": false, 00:20:23.872 "enable_placement_id": 0, 00:20:23.872 "enable_quickack": false, 00:20:23.872 "enable_recv_pipe": true, 00:20:23.872 "enable_zerocopy_send_client": false, 00:20:23.872 "enable_zerocopy_send_server": true, 00:20:23.872 "impl_name": "ssl", 00:20:23.872 "recv_buf_size": 4096, 00:20:23.872 "send_buf_size": 4096, 00:20:23.872 "tls_version": 0, 00:20:23.872 "zerocopy_threshold": 0 00:20:23.872 } 00:20:23.872 }, 00:20:23.872 { 00:20:23.872 "method": "sock_impl_set_options", 00:20:23.872 "params": { 00:20:23.872 "enable_ktls": false, 00:20:23.872 "enable_placement_id": 0, 00:20:23.872 "enable_quickack": false, 00:20:23.872 "enable_recv_pipe": true, 00:20:23.872 "enable_zerocopy_send_client": false, 00:20:23.872 "enable_zerocopy_send_server": true, 00:20:23.872 "impl_name": "posix", 00:20:23.872 "recv_buf_size": 2097152, 00:20:23.872 "send_buf_size": 2097152, 00:20:23.872 "tls_version": 0, 00:20:23.872 "zerocopy_threshold": 0 00:20:23.872 } 00:20:23.872 } 00:20:23.872 ] 00:20:23.872 }, 00:20:23.872 { 00:20:23.872 "subsystem": "vmd", 00:20:23.872 "config": [] 00:20:23.872 }, 00:20:23.872 { 00:20:23.872 "subsystem": "accel", 00:20:23.872 "config": [ 00:20:23.872 { 00:20:23.872 "method": "accel_set_options", 00:20:23.872 "params": { 00:20:23.872 "buf_count": 2048, 00:20:23.872 "large_cache_size": 16, 00:20:23.872 "sequence_count": 2048, 00:20:23.872 "small_cache_size": 128, 00:20:23.872 "task_count": 2048 00:20:23.872 } 00:20:23.872 } 00:20:23.872 ] 00:20:23.872 }, 00:20:23.872 { 00:20:23.872 "subsystem": "bdev", 00:20:23.873 "config": [ 00:20:23.873 { 00:20:23.873 "method": "bdev_set_options", 00:20:23.873 "params": { 00:20:23.873 "bdev_auto_examine": true, 00:20:23.873 "bdev_io_cache_size": 256, 00:20:23.873 "bdev_io_pool_size": 65535, 00:20:23.873 "iobuf_large_cache_size": 16, 00:20:23.873 "iobuf_small_cache_size": 128 00:20:23.873 } 00:20:23.873 }, 00:20:23.873 { 00:20:23.873 "method": "bdev_raid_set_options", 00:20:23.873 "params": { 00:20:23.873 "process_window_size_kb": 1024 00:20:23.873 } 00:20:23.873 }, 00:20:23.873 { 00:20:23.873 "method": "bdev_iscsi_set_options", 00:20:23.873 "params": { 00:20:23.873 "timeout_sec": 30 00:20:23.873 } 00:20:23.873 }, 00:20:23.873 { 00:20:23.873 "method": "bdev_nvme_set_options", 00:20:23.873 "params": { 00:20:23.873 "action_on_timeout": "none", 00:20:23.873 "allow_accel_sequence": false, 00:20:23.873 "arbitration_burst": 0, 00:20:23.873 "bdev_retry_count": 3, 00:20:23.873 "ctrlr_loss_timeout_sec": 0, 00:20:23.873 "delay_cmd_submit": true, 00:20:23.873 "dhchap_dhgroups": [ 00:20:23.873 "null", 00:20:23.873 "ffdhe2048", 00:20:23.873 "ffdhe3072", 00:20:23.873 "ffdhe4096", 00:20:23.873 "ffdhe6144", 00:20:23.873 "ffdhe8192" 00:20:23.873 ], 00:20:23.873 "dhchap_digests": [ 00:20:23.873 "sha256", 00:20:23.873 "sha384", 00:20:23.873 "sha512" 00:20:23.873 ], 00:20:23.873 "disable_auto_failback": false, 00:20:23.873 "fast_io_fail_timeout_sec": 0, 00:20:23.873 "generate_uuids": false, 00:20:23.873 "high_priority_weight": 0, 00:20:23.873 "io_path_stat": 
false, 00:20:23.873 "io_queue_requests": 0, 00:20:23.873 "keep_alive_timeout_ms": 10000, 00:20:23.873 "low_priority_weight": 0, 00:20:23.873 "medium_priority_weight": 0, 00:20:23.873 "nvme_adminq_poll_period_us": 10000, 00:20:23.873 "nvme_error_stat": false, 00:20:23.873 "nvme_ioq_poll_period_us": 0, 00:20:23.873 "rdma_cm_event_timeout_ms": 0, 00:20:23.873 "rdma_max_cq_size": 0, 00:20:23.873 "rdma_srq_size": 0, 00:20:23.873 "reconnect_delay_sec": 0, 00:20:23.873 "timeout_admin_us": 0, 00:20:23.873 "timeout_us": 0, 00:20:23.873 "transport_ack_timeout": 0, 00:20:23.873 "transport_retry_count": 4, 00:20:23.873 "transport_tos": 0 00:20:23.873 } 00:20:23.873 }, 00:20:23.873 { 00:20:23.873 "method": "bdev_nvme_set_hotplug", 00:20:23.873 "params": { 00:20:23.873 "enable": false, 00:20:23.873 "period_us": 100000 00:20:23.873 } 00:20:23.873 }, 00:20:23.873 { 00:20:23.873 "method": "bdev_malloc_create", 00:20:23.873 "params": { 00:20:23.873 "block_size": 4096, 00:20:23.873 "name": "malloc0", 00:20:23.873 "num_blocks": 8192, 00:20:23.873 "optimal_io_boundary": 0, 00:20:23.873 "physical_block_size": 4096, 00:20:23.873 "uuid": "c0cd46a3-eeea-452b-9d97-10821ccc52ee" 00:20:23.873 } 00:20:23.873 }, 00:20:23.873 { 00:20:23.873 "method": "bdev_wait_for_examine" 00:20:23.873 } 00:20:23.873 ] 00:20:23.873 }, 00:20:23.873 { 00:20:23.873 "subsystem": "nbd", 00:20:23.873 "config": [] 00:20:23.873 }, 00:20:23.873 { 00:20:23.873 "subsystem": "scheduler", 00:20:23.873 "config": [ 00:20:23.873 { 00:20:23.873 "method": "framework_set_scheduler", 00:20:23.873 "params": { 00:20:23.873 "name": "static" 00:20:23.873 } 00:20:23.873 } 00:20:23.873 ] 00:20:23.873 }, 00:20:23.873 { 00:20:23.873 "subsystem": "nvmf", 00:20:23.873 "config": [ 00:20:23.873 { 00:20:23.873 "method": "nvmf_set_config", 00:20:23.873 "params": { 00:20:23.873 "admin_cmd_passthru": { 00:20:23.873 "identify_ctrlr": false 00:20:23.873 }, 00:20:23.873 "discovery_filter": "match_any" 00:20:23.873 } 00:20:23.873 }, 00:20:23.873 { 00:20:23.873 "method": "nvmf_set_max_subsystems", 00:20:23.873 "params": { 00:20:23.873 "max_subsystems": 1024 00:20:23.873 } 00:20:23.873 }, 00:20:23.873 { 00:20:23.873 "method": "nvmf_set_crdt", 00:20:23.873 "params": { 00:20:23.873 "crdt1": 0, 00:20:23.873 "crdt2": 0, 00:20:23.873 "crdt3": 0 00:20:23.873 } 00:20:23.873 }, 00:20:23.873 { 00:20:23.873 "method": "nvmf_create_transport", 00:20:23.873 "params": { 00:20:23.873 "abort_timeout_sec": 1, 00:20:23.873 "ack_timeout": 0, 00:20:23.873 "buf_cache_size": 4294967295, 00:20:23.873 "c2h_success": false, 00:20:23.873 "data_wr_pool_size": 0, 00:20:23.873 "dif_insert_or_strip": false, 00:20:23.873 "in_capsule_data_size": 4096, 00:20:23.873 "io_unit_size": 131072, 00:20:23.873 "max_aq_depth": 128, 00:20:23.873 "max_io_qpairs_per_ctrlr": 127, 00:20:23.873 "max_io_size": 131072, 00:20:23.873 "max_queue_depth": 128, 00:20:23.873 "num_shared_buffers": 511, 00:20:23.873 "sock_priority": 0, 00:20:23.873 "trtype": "TCP", 00:20:23.873 "zcopy": false 00:20:23.873 } 00:20:23.873 }, 00:20:23.873 { 00:20:23.873 "method": "nvmf_create_subsystem", 00:20:23.873 "params": { 00:20:23.873 "allow_any_host": false, 00:20:23.873 "ana_reporting": false, 00:20:23.873 "max_cntlid": 65519, 00:20:23.873 "max_namespaces": 10, 00:20:23.873 "min_cntlid": 1, 00:20:23.873 "model_number": "SPDK bdev Controller", 00:20:23.873 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:23.873 "serial_number": "SPDK00000000000001" 00:20:23.873 } 00:20:23.873 }, 00:20:23.873 { 00:20:23.873 "method": "nvmf_subsystem_add_host", 
00:20:23.873 "params": { 00:20:23.873 "host": "nqn.2016-06.io.spdk:host1", 00:20:23.873 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:23.873 "psk": "/tmp/tmp.lTia0AerDv" 00:20:23.873 } 00:20:23.873 }, 00:20:23.873 { 00:20:23.873 "method": "nvmf_subsystem_add_ns", 00:20:23.873 "params": { 00:20:23.873 "namespace": { 00:20:23.873 "bdev_name": "malloc0", 00:20:23.873 "nguid": "C0CD46A3EEEA452B9D9710821CCC52EE", 00:20:23.873 "no_auto_visible": false, 00:20:23.873 "nsid": 1, 00:20:23.873 "uuid": "c0cd46a3-eeea-452b-9d97-10821ccc52ee" 00:20:23.873 }, 00:20:23.873 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:20:23.873 } 00:20:23.873 }, 00:20:23.873 { 00:20:23.873 "method": "nvmf_subsystem_add_listener", 00:20:23.873 "params": { 00:20:23.873 "listen_address": { 00:20:23.873 "adrfam": "IPv4", 00:20:23.873 "traddr": "10.0.0.2", 00:20:23.873 "trsvcid": "4420", 00:20:23.873 "trtype": "TCP" 00:20:23.873 }, 00:20:23.873 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:23.873 "secure_channel": true 00:20:23.873 } 00:20:23.873 } 00:20:23.873 ] 00:20:23.873 } 00:20:23.873 ] 00:20:23.873 }' 00:20:23.873 13:19:20 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:24.132 13:19:20 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:20:24.132 "subsystems": [ 00:20:24.132 { 00:20:24.132 "subsystem": "keyring", 00:20:24.132 "config": [] 00:20:24.132 }, 00:20:24.132 { 00:20:24.132 "subsystem": "iobuf", 00:20:24.132 "config": [ 00:20:24.132 { 00:20:24.132 "method": "iobuf_set_options", 00:20:24.132 "params": { 00:20:24.132 "large_bufsize": 135168, 00:20:24.132 "large_pool_count": 1024, 00:20:24.132 "small_bufsize": 8192, 00:20:24.132 "small_pool_count": 8192 00:20:24.132 } 00:20:24.132 } 00:20:24.132 ] 00:20:24.132 }, 00:20:24.132 { 00:20:24.132 "subsystem": "sock", 00:20:24.132 "config": [ 00:20:24.132 { 00:20:24.132 "method": "sock_set_default_impl", 00:20:24.132 "params": { 00:20:24.132 "impl_name": "posix" 00:20:24.132 } 00:20:24.132 }, 00:20:24.132 { 00:20:24.132 "method": "sock_impl_set_options", 00:20:24.132 "params": { 00:20:24.132 "enable_ktls": false, 00:20:24.132 "enable_placement_id": 0, 00:20:24.132 "enable_quickack": false, 00:20:24.132 "enable_recv_pipe": true, 00:20:24.132 "enable_zerocopy_send_client": false, 00:20:24.132 "enable_zerocopy_send_server": true, 00:20:24.132 "impl_name": "ssl", 00:20:24.132 "recv_buf_size": 4096, 00:20:24.132 "send_buf_size": 4096, 00:20:24.132 "tls_version": 0, 00:20:24.132 "zerocopy_threshold": 0 00:20:24.132 } 00:20:24.132 }, 00:20:24.132 { 00:20:24.132 "method": "sock_impl_set_options", 00:20:24.132 "params": { 00:20:24.132 "enable_ktls": false, 00:20:24.132 "enable_placement_id": 0, 00:20:24.132 "enable_quickack": false, 00:20:24.132 "enable_recv_pipe": true, 00:20:24.132 "enable_zerocopy_send_client": false, 00:20:24.132 "enable_zerocopy_send_server": true, 00:20:24.132 "impl_name": "posix", 00:20:24.132 "recv_buf_size": 2097152, 00:20:24.132 "send_buf_size": 2097152, 00:20:24.132 "tls_version": 0, 00:20:24.132 "zerocopy_threshold": 0 00:20:24.132 } 00:20:24.132 } 00:20:24.132 ] 00:20:24.132 }, 00:20:24.132 { 00:20:24.132 "subsystem": "vmd", 00:20:24.132 "config": [] 00:20:24.132 }, 00:20:24.132 { 00:20:24.132 "subsystem": "accel", 00:20:24.132 "config": [ 00:20:24.132 { 00:20:24.132 "method": "accel_set_options", 00:20:24.132 "params": { 00:20:24.132 "buf_count": 2048, 00:20:24.132 "large_cache_size": 16, 00:20:24.132 "sequence_count": 2048, 00:20:24.132 "small_cache_size": 128, 
00:20:24.132 "task_count": 2048 00:20:24.132 } 00:20:24.132 } 00:20:24.132 ] 00:20:24.132 }, 00:20:24.132 { 00:20:24.132 "subsystem": "bdev", 00:20:24.132 "config": [ 00:20:24.132 { 00:20:24.132 "method": "bdev_set_options", 00:20:24.132 "params": { 00:20:24.132 "bdev_auto_examine": true, 00:20:24.132 "bdev_io_cache_size": 256, 00:20:24.132 "bdev_io_pool_size": 65535, 00:20:24.132 "iobuf_large_cache_size": 16, 00:20:24.132 "iobuf_small_cache_size": 128 00:20:24.132 } 00:20:24.132 }, 00:20:24.132 { 00:20:24.132 "method": "bdev_raid_set_options", 00:20:24.132 "params": { 00:20:24.132 "process_window_size_kb": 1024 00:20:24.132 } 00:20:24.132 }, 00:20:24.132 { 00:20:24.132 "method": "bdev_iscsi_set_options", 00:20:24.132 "params": { 00:20:24.132 "timeout_sec": 30 00:20:24.132 } 00:20:24.132 }, 00:20:24.132 { 00:20:24.132 "method": "bdev_nvme_set_options", 00:20:24.132 "params": { 00:20:24.132 "action_on_timeout": "none", 00:20:24.132 "allow_accel_sequence": false, 00:20:24.132 "arbitration_burst": 0, 00:20:24.132 "bdev_retry_count": 3, 00:20:24.132 "ctrlr_loss_timeout_sec": 0, 00:20:24.132 "delay_cmd_submit": true, 00:20:24.132 "dhchap_dhgroups": [ 00:20:24.132 "null", 00:20:24.132 "ffdhe2048", 00:20:24.132 "ffdhe3072", 00:20:24.132 "ffdhe4096", 00:20:24.132 "ffdhe6144", 00:20:24.132 "ffdhe8192" 00:20:24.132 ], 00:20:24.132 "dhchap_digests": [ 00:20:24.132 "sha256", 00:20:24.132 "sha384", 00:20:24.132 "sha512" 00:20:24.132 ], 00:20:24.132 "disable_auto_failback": false, 00:20:24.132 "fast_io_fail_timeout_sec": 0, 00:20:24.132 "generate_uuids": false, 00:20:24.132 "high_priority_weight": 0, 00:20:24.132 "io_path_stat": false, 00:20:24.132 "io_queue_requests": 512, 00:20:24.132 "keep_alive_timeout_ms": 10000, 00:20:24.132 "low_priority_weight": 0, 00:20:24.132 "medium_priority_weight": 0, 00:20:24.132 "nvme_adminq_poll_period_us": 10000, 00:20:24.132 "nvme_error_stat": false, 00:20:24.132 "nvme_ioq_poll_period_us": 0, 00:20:24.132 "rdma_cm_event_timeout_ms": 0, 00:20:24.132 "rdma_max_cq_size": 0, 00:20:24.132 "rdma_srq_size": 0, 00:20:24.132 "reconnect_delay_sec": 0, 00:20:24.132 "timeout_admin_us": 0, 00:20:24.132 "timeout_us": 0, 00:20:24.132 "transport_ack_timeout": 0, 00:20:24.132 "transport_retry_count": 4, 00:20:24.132 "transport_tos": 0 00:20:24.132 } 00:20:24.132 }, 00:20:24.132 { 00:20:24.132 "method": "bdev_nvme_attach_controller", 00:20:24.132 "params": { 00:20:24.132 "adrfam": "IPv4", 00:20:24.132 "ctrlr_loss_timeout_sec": 0, 00:20:24.132 "ddgst": false, 00:20:24.132 "fast_io_fail_timeout_sec": 0, 00:20:24.132 "hdgst": false, 00:20:24.132 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:24.132 "name": "TLSTEST", 00:20:24.132 "prchk_guard": false, 00:20:24.132 "prchk_reftag": false, 00:20:24.132 "psk": "/tmp/tmp.lTia0AerDv", 00:20:24.132 "reconnect_delay_sec": 0, 00:20:24.132 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:24.132 "traddr": "10.0.0.2", 00:20:24.132 "trsvcid": "4420", 00:20:24.132 "trtype": "TCP" 00:20:24.132 } 00:20:24.132 }, 00:20:24.132 { 00:20:24.132 "method": "bdev_nvme_set_hotplug", 00:20:24.132 "params": { 00:20:24.132 "enable": false, 00:20:24.132 "period_us": 100000 00:20:24.132 } 00:20:24.132 }, 00:20:24.132 { 00:20:24.132 "method": "bdev_wait_for_examine" 00:20:24.132 } 00:20:24.132 ] 00:20:24.132 }, 00:20:24.132 { 00:20:24.132 "subsystem": "nbd", 00:20:24.132 "config": [] 00:20:24.132 } 00:20:24.132 ] 00:20:24.132 }' 00:20:24.132 13:19:20 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 100090 00:20:24.132 13:19:20 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@946 -- # '[' -z 100090 ']' 00:20:24.132 13:19:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 100090 00:20:24.132 13:19:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:24.132 13:19:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:24.132 13:19:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 100090 00:20:24.390 13:19:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:20:24.390 13:19:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:20:24.390 killing process with pid 100090 00:20:24.390 13:19:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 100090' 00:20:24.390 13:19:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 100090 00:20:24.390 Received shutdown signal, test time was about 10.000000 seconds 00:20:24.390 00:20:24.390 Latency(us) 00:20:24.390 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:24.390 =================================================================================================================== 00:20:24.390 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:24.390 [2024-07-15 13:19:20.874271] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:24.390 13:19:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 100090 00:20:24.390 13:19:21 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 99993 00:20:24.390 13:19:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 99993 ']' 00:20:24.390 13:19:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 99993 00:20:24.390 13:19:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:24.390 13:19:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:24.390 13:19:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 99993 00:20:24.390 13:19:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:24.390 killing process with pid 99993 00:20:24.390 13:19:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:24.390 13:19:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 99993' 00:20:24.390 13:19:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 99993 00:20:24.390 [2024-07-15 13:19:21.115960] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:24.390 13:19:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 99993 00:20:24.648 13:19:21 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:20:24.648 13:19:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:24.648 13:19:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:24.648 13:19:21 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:20:24.648 "subsystems": [ 00:20:24.648 { 00:20:24.648 "subsystem": "keyring", 00:20:24.648 "config": [] 00:20:24.648 }, 00:20:24.648 { 00:20:24.648 "subsystem": "iobuf", 00:20:24.648 "config": [ 00:20:24.648 { 00:20:24.648 "method": "iobuf_set_options", 00:20:24.648 "params": { 00:20:24.648 "large_bufsize": 135168, 00:20:24.648 
"large_pool_count": 1024, 00:20:24.648 "small_bufsize": 8192, 00:20:24.648 "small_pool_count": 8192 00:20:24.648 } 00:20:24.648 } 00:20:24.648 ] 00:20:24.648 }, 00:20:24.648 { 00:20:24.648 "subsystem": "sock", 00:20:24.648 "config": [ 00:20:24.648 { 00:20:24.648 "method": "sock_set_default_impl", 00:20:24.648 "params": { 00:20:24.648 "impl_name": "posix" 00:20:24.648 } 00:20:24.648 }, 00:20:24.648 { 00:20:24.648 "method": "sock_impl_set_options", 00:20:24.648 "params": { 00:20:24.648 "enable_ktls": false, 00:20:24.648 "enable_placement_id": 0, 00:20:24.648 "enable_quickack": false, 00:20:24.648 "enable_recv_pipe": true, 00:20:24.648 "enable_zerocopy_send_client": false, 00:20:24.648 "enable_zerocopy_send_server": true, 00:20:24.648 "impl_name": "ssl", 00:20:24.648 "recv_buf_size": 4096, 00:20:24.648 "send_buf_size": 4096, 00:20:24.648 "tls_version": 0, 00:20:24.648 "zerocopy_threshold": 0 00:20:24.648 } 00:20:24.648 }, 00:20:24.648 { 00:20:24.648 "method": "sock_impl_set_options", 00:20:24.648 "params": { 00:20:24.648 "enable_ktls": false, 00:20:24.648 "enable_placement_id": 0, 00:20:24.648 "enable_quickack": false, 00:20:24.648 "enable_recv_pipe": true, 00:20:24.648 "enable_zerocopy_send_client": false, 00:20:24.648 "enable_zerocopy_send_server": true, 00:20:24.648 "impl_name": "posix", 00:20:24.648 "recv_buf_size": 2097152, 00:20:24.648 "send_buf_size": 2097152, 00:20:24.648 "tls_version": 0, 00:20:24.648 "zerocopy_threshold": 0 00:20:24.648 } 00:20:24.648 } 00:20:24.648 ] 00:20:24.648 }, 00:20:24.648 { 00:20:24.648 "subsystem": "vmd", 00:20:24.648 "config": [] 00:20:24.648 }, 00:20:24.648 { 00:20:24.648 "subsystem": "accel", 00:20:24.648 "config": [ 00:20:24.648 { 00:20:24.648 "method": "accel_set_options", 00:20:24.648 "params": { 00:20:24.648 "buf_count": 2048, 00:20:24.648 "large_cache_size": 16, 00:20:24.648 "sequence_count": 2048, 00:20:24.648 "small_cache_size": 128, 00:20:24.648 "task_count": 2048 00:20:24.648 } 00:20:24.648 } 00:20:24.648 ] 00:20:24.648 }, 00:20:24.648 { 00:20:24.648 "subsystem": "bdev", 00:20:24.648 "config": [ 00:20:24.648 { 00:20:24.648 "method": "bdev_set_options", 00:20:24.648 "params": { 00:20:24.648 "bdev_auto_examine": true, 00:20:24.648 "bdev_io_cache_size": 256, 00:20:24.648 "bdev_io_pool_size": 65535, 00:20:24.648 "iobuf_large_cache_size": 16, 00:20:24.648 "iobuf_small_cache_size": 128 00:20:24.648 } 00:20:24.648 }, 00:20:24.648 { 00:20:24.648 "method": "bdev_raid_set_options", 00:20:24.648 "params": { 00:20:24.648 "process_window_size_kb": 1024 00:20:24.648 } 00:20:24.648 }, 00:20:24.648 { 00:20:24.648 "method": "bdev_iscsi_set_options", 00:20:24.648 "params": { 00:20:24.648 "timeout_sec": 30 00:20:24.648 } 00:20:24.648 }, 00:20:24.648 { 00:20:24.648 "method": "bdev_nvme_set_options", 00:20:24.648 "params": { 00:20:24.648 "action_on_timeout": "none", 00:20:24.648 "allow_accel_sequence": false, 00:20:24.648 "arbitration_burst": 0, 00:20:24.648 "bdev_retry_count": 3, 00:20:24.648 "ctrlr_loss_timeout_sec": 0, 00:20:24.648 "delay_cmd_submit": true, 00:20:24.648 "dhchap_dhgroups": [ 00:20:24.648 "null", 00:20:24.648 "ffdhe2048", 00:20:24.648 "ffdhe3072", 00:20:24.648 "ffdhe4096", 00:20:24.648 "ffdhe6144", 00:20:24.648 "ffdhe8192" 00:20:24.648 ], 00:20:24.648 "dhchap_digests": [ 00:20:24.648 "sha256", 00:20:24.648 "sha384", 00:20:24.648 "sha512" 00:20:24.648 ], 00:20:24.648 "disable_auto_failback": false, 00:20:24.648 "fast_io_fail_timeout_sec": 0, 00:20:24.648 "generate_uuids": false, 00:20:24.648 "high_priority_weight": 0, 00:20:24.648 "io_path_stat": 
false, 00:20:24.648 "io_queue_requests": 0, 00:20:24.648 "keep_alive_timeout_ms": 10000, 00:20:24.648 "low_priority_weight": 0, 00:20:24.648 "medium_priority_weight": 0, 00:20:24.648 "nvme_adminq_poll_period_us": 10000, 00:20:24.648 "nvme_error_stat": false, 00:20:24.648 "nvme_ioq_poll_period_us": 0, 00:20:24.648 "rdma_cm_event_timeout_ms": 0, 00:20:24.648 "rdma_max_cq_size": 0, 00:20:24.648 "rdma_srq_size": 0, 00:20:24.648 "reconnect_delay_sec": 0, 00:20:24.648 "timeout_admin_us": 0, 00:20:24.648 "timeout_us": 0, 00:20:24.648 "transport_ack_timeout": 0, 00:20:24.648 "transport_retry_count": 4, 00:20:24.648 "transport_tos": 0 00:20:24.648 } 00:20:24.648 }, 00:20:24.648 { 00:20:24.648 "method": "bdev_nvme_set_hotplug", 00:20:24.648 "params": { 00:20:24.648 "enable": false, 00:20:24.648 "period_us": 100000 00:20:24.648 } 00:20:24.648 }, 00:20:24.649 { 00:20:24.649 "method": "bdev_malloc_create", 00:20:24.649 "params": { 00:20:24.649 "block_size": 4096, 00:20:24.649 "name": "malloc0", 00:20:24.649 "num_blocks": 8192, 00:20:24.649 "optimal_io_boundary": 0, 00:20:24.649 "physical_block_size": 4096, 00:20:24.649 "uuid": "c0cd46a3-eeea-452b-9d97-10821ccc52ee" 00:20:24.649 } 00:20:24.649 }, 00:20:24.649 { 00:20:24.649 "method": "bdev_wait_for_examine" 00:20:24.649 } 00:20:24.649 ] 00:20:24.649 }, 00:20:24.649 { 00:20:24.649 "subsystem": "nbd", 00:20:24.649 "config": [] 00:20:24.649 }, 00:20:24.649 { 00:20:24.649 "subsystem": "scheduler", 00:20:24.649 "config": [ 00:20:24.649 { 00:20:24.649 "method": "framework_set_scheduler", 00:20:24.649 "params": { 00:20:24.649 "name": "static" 00:20:24.649 } 00:20:24.649 } 00:20:24.649 ] 00:20:24.649 }, 00:20:24.649 { 00:20:24.649 "subsystem": "nvmf", 00:20:24.649 "config": [ 00:20:24.649 { 00:20:24.649 "method": "nvmf_set_config", 00:20:24.649 "params": { 00:20:24.649 "admin_cmd_passthru": { 00:20:24.649 "identify_ctrlr": false 00:20:24.649 }, 00:20:24.649 "discovery_filter": "match_any" 00:20:24.649 } 00:20:24.649 }, 00:20:24.649 { 00:20:24.649 "method": "nvmf_set_max_subsystems", 00:20:24.649 "params": { 00:20:24.649 "max_subsystems": 1024 00:20:24.649 } 00:20:24.649 }, 00:20:24.649 { 00:20:24.649 "method": "nvmf_set_crdt", 00:20:24.649 "params": { 00:20:24.649 "crdt1": 0, 00:20:24.649 "crdt2": 0, 00:20:24.649 "crdt3": 0 00:20:24.649 } 00:20:24.649 }, 00:20:24.649 { 00:20:24.649 "method": "nvmf_create_transport", 00:20:24.649 "params": { 00:20:24.649 "abort_timeout_sec": 1, 00:20:24.649 "ack_timeout": 0, 00:20:24.649 "buf_cache_size": 4294967295, 00:20:24.649 "c2h_success": false, 00:20:24.649 "data_wr_pool_size": 0, 00:20:24.649 "dif_insert_or_strip": false, 00:20:24.649 "in_capsule_data_size": 4096, 00:20:24.649 "io_unit_size": 131072, 00:20:24.649 "max_aq_depth": 128, 00:20:24.649 "max_io_qpairs_per_ctrlr": 127, 00:20:24.649 "max_io_size": 131072, 00:20:24.649 "max_queue_depth": 128, 00:20:24.649 "num_shared_buffers": 511, 00:20:24.649 "sock_priority": 0, 00:20:24.649 "trtype": "TCP", 00:20:24.649 "zcopy": false 00:20:24.649 } 00:20:24.649 }, 00:20:24.649 { 00:20:24.649 "method": "nvmf_create_subsystem", 00:20:24.649 "params": { 00:20:24.649 "allow_any_host": false, 00:20:24.649 "ana_reporting": false, 00:20:24.649 "max_cntlid": 65519, 00:20:24.649 "max_namespaces": 10, 00:20:24.649 "min_cntlid": 1, 00:20:24.649 "model_number": "SPDK bdev Controller", 00:20:24.649 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:24.649 "serial_number": "SPDK00000000000001" 00:20:24.649 } 00:20:24.649 }, 00:20:24.649 { 00:20:24.649 "method": "nvmf_subsystem_add_host", 
00:20:24.649 "params": { 00:20:24.649 "host": "nqn.2016-06.io.spdk:host1", 00:20:24.649 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:24.649 "psk": "/tmp/tmp.lTia0AerDv" 00:20:24.649 } 00:20:24.649 }, 00:20:24.649 { 00:20:24.649 "method": "nvmf_subsystem_add_ns", 00:20:24.649 "params": { 00:20:24.649 "namespace": { 00:20:24.649 "bdev_name": "malloc0", 00:20:24.649 "nguid": "C0CD46A3EEEA452B9D9710821CCC52EE", 00:20:24.649 "no_auto_visible": false, 00:20:24.649 "nsid": 1, 00:20:24.649 "uuid": "c0cd46a3-eeea-452b-9d97-10821ccc52ee" 00:20:24.649 }, 00:20:24.649 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:20:24.649 } 00:20:24.649 }, 00:20:24.649 { 00:20:24.649 "method": "nvmf_subsystem_add_listener", 00:20:24.649 "params": { 00:20:24.649 "listen_address": { 00:20:24.649 "adrfam": "IPv4", 00:20:24.649 "traddr": "10.0.0.2", 00:20:24.649 "trsvcid": "4420", 00:20:24.649 "trtype": "TCP" 00:20:24.649 }, 00:20:24.649 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:24.649 "secure_channel": true 00:20:24.649 } 00:20:24.649 } 00:20:24.649 ] 00:20:24.649 } 00:20:24.649 ] 00:20:24.649 }' 00:20:24.649 13:19:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:24.649 13:19:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=100169 00:20:24.649 13:19:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:20:24.649 13:19:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 100169 00:20:24.649 13:19:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 100169 ']' 00:20:24.649 13:19:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:24.649 13:19:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:24.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:24.649 13:19:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:24.649 13:19:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:24.649 13:19:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:24.907 [2024-07-15 13:19:21.407202] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:20:24.907 [2024-07-15 13:19:21.407326] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:24.907 [2024-07-15 13:19:21.541489] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:25.164 [2024-07-15 13:19:21.647553] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:25.164 [2024-07-15 13:19:21.647631] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:25.164 [2024-07-15 13:19:21.647643] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:25.164 [2024-07-15 13:19:21.647651] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:25.164 [2024-07-15 13:19:21.647659] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
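Steps @196–@203 above round-trip the configuration: save_config is issued against both the target's default RPC socket and /var/tmp/bdevperf.sock, the old processes are killed, and the target is restarted with the captured JSON fed back in through -c /dev/fd/62. The same round trip sketched with a plain file instead of the script's process substitution (tgtconf.json is an illustrative name):

    # capture the live configuration built by the PSK setup above
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config > tgtconf.json
    # restart the target preconfigured; no further rpc.py setup calls are needed
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c tgtconf.json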
00:20:25.164 [2024-07-15 13:19:21.647779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:25.164 [2024-07-15 13:19:21.878033] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:25.164 [2024-07-15 13:19:21.893978] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:25.422 [2024-07-15 13:19:21.909998] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:25.422 [2024-07-15 13:19:21.910272] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:25.680 13:19:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:25.680 13:19:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:25.680 13:19:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:25.680 13:19:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:25.680 13:19:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:25.680 13:19:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:25.680 13:19:22 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=100213 00:20:25.680 13:19:22 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 100213 /var/tmp/bdevperf.sock 00:20:25.680 13:19:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 100213 ']' 00:20:25.680 13:19:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:25.680 13:19:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:25.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:25.680 13:19:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
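The bdevperf instance started next takes the same config-driven path: instead of attaching the controller over RPC, the saved initiator configuration dumped below (including the bdev_nvme_attach_controller entry that carries "psk": "/tmp/tmp.lTia0AerDv") is passed in through -c /dev/fd/63. An equivalent invocation with a plain file in place of the file descriptor (bdevperfconf.json is an illustrative name):

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 -c bdevperfconf.json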
00:20:25.680 13:19:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:25.680 13:19:22 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:20:25.680 13:19:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:25.680 13:19:22 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:20:25.680 "subsystems": [ 00:20:25.680 { 00:20:25.680 "subsystem": "keyring", 00:20:25.680 "config": [] 00:20:25.680 }, 00:20:25.680 { 00:20:25.680 "subsystem": "iobuf", 00:20:25.680 "config": [ 00:20:25.680 { 00:20:25.680 "method": "iobuf_set_options", 00:20:25.680 "params": { 00:20:25.680 "large_bufsize": 135168, 00:20:25.680 "large_pool_count": 1024, 00:20:25.680 "small_bufsize": 8192, 00:20:25.680 "small_pool_count": 8192 00:20:25.680 } 00:20:25.680 } 00:20:25.680 ] 00:20:25.680 }, 00:20:25.680 { 00:20:25.680 "subsystem": "sock", 00:20:25.680 "config": [ 00:20:25.680 { 00:20:25.680 "method": "sock_set_default_impl", 00:20:25.680 "params": { 00:20:25.680 "impl_name": "posix" 00:20:25.680 } 00:20:25.680 }, 00:20:25.680 { 00:20:25.680 "method": "sock_impl_set_options", 00:20:25.680 "params": { 00:20:25.680 "enable_ktls": false, 00:20:25.680 "enable_placement_id": 0, 00:20:25.680 "enable_quickack": false, 00:20:25.680 "enable_recv_pipe": true, 00:20:25.680 "enable_zerocopy_send_client": false, 00:20:25.680 "enable_zerocopy_send_server": true, 00:20:25.680 "impl_name": "ssl", 00:20:25.680 "recv_buf_size": 4096, 00:20:25.680 "send_buf_size": 4096, 00:20:25.680 "tls_version": 0, 00:20:25.680 "zerocopy_threshold": 0 00:20:25.680 } 00:20:25.680 }, 00:20:25.680 { 00:20:25.680 "method": "sock_impl_set_options", 00:20:25.680 "params": { 00:20:25.680 "enable_ktls": false, 00:20:25.680 "enable_placement_id": 0, 00:20:25.680 "enable_quickack": false, 00:20:25.680 "enable_recv_pipe": true, 00:20:25.680 "enable_zerocopy_send_client": false, 00:20:25.680 "enable_zerocopy_send_server": true, 00:20:25.680 "impl_name": "posix", 00:20:25.680 "recv_buf_size": 2097152, 00:20:25.680 "send_buf_size": 2097152, 00:20:25.680 "tls_version": 0, 00:20:25.680 "zerocopy_threshold": 0 00:20:25.680 } 00:20:25.680 } 00:20:25.680 ] 00:20:25.680 }, 00:20:25.680 { 00:20:25.680 "subsystem": "vmd", 00:20:25.680 "config": [] 00:20:25.680 }, 00:20:25.680 { 00:20:25.680 "subsystem": "accel", 00:20:25.680 "config": [ 00:20:25.680 { 00:20:25.680 "method": "accel_set_options", 00:20:25.680 "params": { 00:20:25.680 "buf_count": 2048, 00:20:25.680 "large_cache_size": 16, 00:20:25.681 "sequence_count": 2048, 00:20:25.681 "small_cache_size": 128, 00:20:25.681 "task_count": 2048 00:20:25.681 } 00:20:25.681 } 00:20:25.681 ] 00:20:25.681 }, 00:20:25.681 { 00:20:25.681 "subsystem": "bdev", 00:20:25.681 "config": [ 00:20:25.681 { 00:20:25.681 "method": "bdev_set_options", 00:20:25.681 "params": { 00:20:25.681 "bdev_auto_examine": true, 00:20:25.681 "bdev_io_cache_size": 256, 00:20:25.681 "bdev_io_pool_size": 65535, 00:20:25.681 "iobuf_large_cache_size": 16, 00:20:25.681 "iobuf_small_cache_size": 128 00:20:25.681 } 00:20:25.681 }, 00:20:25.681 { 00:20:25.681 "method": "bdev_raid_set_options", 00:20:25.681 "params": { 00:20:25.681 "process_window_size_kb": 1024 00:20:25.681 } 00:20:25.681 }, 00:20:25.681 { 00:20:25.681 "method": "bdev_iscsi_set_options", 00:20:25.681 "params": { 00:20:25.681 "timeout_sec": 30 00:20:25.681 } 00:20:25.681 }, 00:20:25.681 { 00:20:25.681 "method": 
"bdev_nvme_set_options", 00:20:25.681 "params": { 00:20:25.681 "action_on_timeout": "none", 00:20:25.681 "allow_accel_sequence": false, 00:20:25.681 "arbitration_burst": 0, 00:20:25.681 "bdev_retry_count": 3, 00:20:25.681 "ctrlr_loss_timeout_sec": 0, 00:20:25.681 "delay_cmd_submit": true, 00:20:25.681 "dhchap_dhgroups": [ 00:20:25.681 "null", 00:20:25.681 "ffdhe2048", 00:20:25.681 "ffdhe3072", 00:20:25.681 "ffdhe4096", 00:20:25.681 "ffdhe6144", 00:20:25.681 "ffdhe8192" 00:20:25.681 ], 00:20:25.681 "dhchap_digests": [ 00:20:25.681 "sha256", 00:20:25.681 "sha384", 00:20:25.681 "sha512" 00:20:25.681 ], 00:20:25.681 "disable_auto_failback": false, 00:20:25.681 "fast_io_fail_timeout_sec": 0, 00:20:25.681 "generate_uuids": false, 00:20:25.681 "high_priority_weight": 0, 00:20:25.681 "io_path_stat": false, 00:20:25.681 "io_queue_requests": 512, 00:20:25.681 "keep_alive_timeout_ms": 10000, 00:20:25.681 "low_priority_weight": 0, 00:20:25.681 "medium_priority_weight": 0, 00:20:25.681 "nvme_adminq_poll_period_us": 10000, 00:20:25.681 "nvme_error_stat": false, 00:20:25.681 "nvme_ioq_poll_period_us": 0, 00:20:25.681 "rdma_cm_event_timeout_ms": 0, 00:20:25.681 "rdma_max_cq_size": 0, 00:20:25.681 "rdma_srq_size": 0, 00:20:25.681 "reconnect_delay_sec": 0, 00:20:25.681 "timeout_admin_us": 0, 00:20:25.681 "timeout_us": 0, 00:20:25.681 "transport_ack_timeout": 0, 00:20:25.681 "transport_retry_count": 4, 00:20:25.681 "transport_tos": 0 00:20:25.681 } 00:20:25.681 }, 00:20:25.681 { 00:20:25.681 "method": "bdev_nvme_attach_controller", 00:20:25.681 "params": { 00:20:25.681 "adrfam": "IPv4", 00:20:25.681 "ctrlr_loss_timeout_sec": 0, 00:20:25.681 "ddgst": false, 00:20:25.681 "fast_io_fail_timeout_sec": 0, 00:20:25.681 "hdgst": false, 00:20:25.681 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:25.681 "name": "TLSTEST", 00:20:25.681 "prchk_guard": false, 00:20:25.681 "prchk_reftag": false, 00:20:25.681 "psk": "/tmp/tmp.lTia0AerDv", 00:20:25.681 "reconnect_delay_sec": 0, 00:20:25.681 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:25.681 "traddr": "10.0.0.2", 00:20:25.681 "trsvcid": "4420", 00:20:25.681 "trtype": "TCP" 00:20:25.681 } 00:20:25.681 }, 00:20:25.681 { 00:20:25.681 "method": "bdev_nvme_set_hotplug", 00:20:25.681 "params": { 00:20:25.681 "enable": false, 00:20:25.681 "period_us": 100000 00:20:25.681 } 00:20:25.681 }, 00:20:25.681 { 00:20:25.681 "method": "bdev_wait_for_examine" 00:20:25.681 } 00:20:25.681 ] 00:20:25.681 }, 00:20:25.681 { 00:20:25.681 "subsystem": "nbd", 00:20:25.681 "config": [] 00:20:25.681 } 00:20:25.681 ] 00:20:25.681 }' 00:20:25.939 [2024-07-15 13:19:22.477566] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:20:25.939 [2024-07-15 13:19:22.478414] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100213 ] 00:20:25.939 [2024-07-15 13:19:22.622667] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:26.197 [2024-07-15 13:19:22.730689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:26.197 [2024-07-15 13:19:22.896525] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:26.197 [2024-07-15 13:19:22.896647] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:27.131 13:19:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:27.131 13:19:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:27.131 13:19:23 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:27.131 Running I/O for 10 seconds... 00:20:37.097 00:20:37.097 Latency(us) 00:20:37.097 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:37.097 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:37.097 Verification LBA range: start 0x0 length 0x2000 00:20:37.097 TLSTESTn1 : 10.02 3792.79 14.82 0.00 0.00 33674.83 7447.27 29312.47 00:20:37.097 =================================================================================================================== 00:20:37.097 Total : 3792.79 14.82 0.00 0.00 33674.83 7447.27 29312.47 00:20:37.097 0 00:20:37.097 13:19:33 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:37.097 13:19:33 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 100213 00:20:37.097 13:19:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 100213 ']' 00:20:37.097 13:19:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 100213 00:20:37.097 13:19:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:37.097 13:19:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:37.097 13:19:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 100213 00:20:37.097 13:19:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:20:37.097 13:19:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:20:37.097 13:19:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 100213' 00:20:37.097 killing process with pid 100213 00:20:37.097 Received shutdown signal, test time was about 10.000000 seconds 00:20:37.097 00:20:37.097 Latency(us) 00:20:37.097 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:37.097 =================================================================================================================== 00:20:37.097 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:37.097 13:19:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 100213 00:20:37.097 [2024-07-15 13:19:33.680934] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:37.097 13:19:33 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@970 -- # wait 100213 00:20:37.353 13:19:33 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 100169 00:20:37.353 13:19:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 100169 ']' 00:20:37.353 13:19:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 100169 00:20:37.353 13:19:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:37.353 13:19:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:37.353 13:19:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 100169 00:20:37.353 13:19:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:37.353 13:19:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:37.353 killing process with pid 100169 00:20:37.353 13:19:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 100169' 00:20:37.353 13:19:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 100169 00:20:37.353 [2024-07-15 13:19:33.924418] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:37.353 13:19:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 100169 00:20:37.610 13:19:34 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:20:37.610 13:19:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:37.610 13:19:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:37.610 13:19:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:37.610 13:19:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=100355 00:20:37.610 13:19:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:37.610 13:19:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 100355 00:20:37.610 13:19:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 100355 ']' 00:20:37.610 13:19:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:37.610 13:19:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:37.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:37.610 13:19:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:37.610 13:19:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:37.610 13:19:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:37.610 [2024-07-15 13:19:34.212673] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:20:37.610 [2024-07-15 13:19:34.212790] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:37.610 [2024-07-15 13:19:34.347373] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:37.867 [2024-07-15 13:19:34.493056] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:37.867 [2024-07-15 13:19:34.493128] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:37.867 [2024-07-15 13:19:34.493143] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:37.867 [2024-07-15 13:19:34.493155] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:37.867 [2024-07-15 13:19:34.493164] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:37.867 [2024-07-15 13:19:34.493227] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:38.799 13:19:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:38.799 13:19:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:38.799 13:19:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:38.799 13:19:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:38.799 13:19:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:38.799 13:19:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:38.799 13:19:35 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.lTia0AerDv 00:20:38.799 13:19:35 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.lTia0AerDv 00:20:38.799 13:19:35 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:39.056 [2024-07-15 13:19:35.609314] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:39.056 13:19:35 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:39.313 13:19:35 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:39.570 [2024-07-15 13:19:36.181548] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:39.570 [2024-07-15 13:19:36.181917] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:39.570 13:19:36 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:39.828 malloc0 00:20:39.828 13:19:36 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:40.085 13:19:36 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lTia0AerDv 00:20:40.649 [2024-07-15 13:19:37.126239] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:40.649 13:19:37 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=100463 00:20:40.649 13:19:37 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:40.649 13:19:37 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:40.649 13:19:37 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 100463 /var/tmp/bdevperf.sock 00:20:40.649 13:19:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 100463 ']' 00:20:40.649 13:19:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- 
# local rpc_addr=/var/tmp/bdevperf.sock 00:20:40.649 13:19:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:40.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:40.649 13:19:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:40.649 13:19:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:40.649 13:19:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:40.649 [2024-07-15 13:19:37.203028] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:20:40.649 [2024-07-15 13:19:37.203142] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100463 ] 00:20:40.649 [2024-07-15 13:19:37.335656] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:40.906 [2024-07-15 13:19:37.442350] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:41.839 13:19:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:41.839 13:19:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:41.839 13:19:38 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.lTia0AerDv 00:20:41.839 13:19:38 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:42.096 [2024-07-15 13:19:38.807729] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:42.354 nvme0n1 00:20:42.354 13:19:38 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:42.354 Running I/O for 1 seconds... 
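For reference, the setup_nvmf_tgt / bdevperf exchange traced above reduces to the following RPC sequence (condensed from the commands visible in the trace, with the repo paths shortened to scripts/rpc.py; the 10.0.0.2:4420 address and the /tmp/tmp.lTia0AerDv PSK file are simply the values this run happened to use):

    # target side, over the default /var/tmp/spdk.sock
    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lTia0AerDv

    # initiator side, over the bdevperf RPC socket
    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.lTia0AerDv
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1

The -k flag on the listener and the --psk arguments on both sides are what exercise the path flagged by the "TLS support is considered experimental" notices in the trace.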
00:20:43.287 00:20:43.287 Latency(us) 00:20:43.287 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:43.287 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:43.287 Verification LBA range: start 0x0 length 0x2000 00:20:43.287 nvme0n1 : 1.02 3787.65 14.80 0.00 0.00 33316.60 3589.59 23712.12 00:20:43.287 =================================================================================================================== 00:20:43.287 Total : 3787.65 14.80 0.00 0.00 33316.60 3589.59 23712.12 00:20:43.287 0 00:20:43.544 13:19:40 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 100463 00:20:43.544 13:19:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 100463 ']' 00:20:43.544 13:19:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 100463 00:20:43.544 13:19:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:43.544 13:19:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:43.544 13:19:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 100463 00:20:43.544 13:19:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:43.544 13:19:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:43.544 13:19:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 100463' 00:20:43.544 killing process with pid 100463 00:20:43.544 13:19:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 100463 00:20:43.544 Received shutdown signal, test time was about 1.000000 seconds 00:20:43.544 00:20:43.544 Latency(us) 00:20:43.544 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:43.544 =================================================================================================================== 00:20:43.544 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:43.544 13:19:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 100463 00:20:43.801 13:19:40 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 100355 00:20:43.801 13:19:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 100355 ']' 00:20:43.801 13:19:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 100355 00:20:43.801 13:19:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:43.801 13:19:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:43.801 13:19:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 100355 00:20:43.801 13:19:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:43.801 killing process with pid 100355 00:20:43.801 13:19:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:43.801 13:19:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 100355' 00:20:43.801 13:19:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 100355 00:20:43.801 [2024-07-15 13:19:40.317366] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:43.801 13:19:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 100355 00:20:44.059 13:19:40 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:20:44.059 13:19:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:44.059 13:19:40 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:44.059 13:19:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:44.059 13:19:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=100539 00:20:44.059 13:19:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:44.059 13:19:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 100539 00:20:44.059 13:19:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 100539 ']' 00:20:44.059 13:19:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:44.059 13:19:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:44.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:44.059 13:19:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:44.059 13:19:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:44.059 13:19:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:44.059 [2024-07-15 13:19:40.631121] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:20:44.059 [2024-07-15 13:19:40.631315] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:44.059 [2024-07-15 13:19:40.778119] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:44.317 [2024-07-15 13:19:40.885757] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:44.317 [2024-07-15 13:19:40.885825] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:44.317 [2024-07-15 13:19:40.885840] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:44.317 [2024-07-15 13:19:40.885851] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:44.317 [2024-07-15 13:19:40.885860] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
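The app_setup_trace notices above are the standard SPDK hint that tracing is active for this target instance (-e 0xFFFF enables all tracepoint groups for trace instance 0). If a snapshot were wanted while the target is still running, the notice's own suggestion amounts to something like the following (a sketch; spdk_trace is assumed to live under build/bin, the usual layout for this repo):

    build/bin/spdk_trace -s nvmf -i 0            # decode a snapshot of events at runtime
    cp /dev/shm/nvmf_trace.0 ./nvmf_trace.0      # or keep the raw shm file for offline analysis

The autotest itself takes the second route at teardown, when process_shm tars up /dev/shm/nvmf_trace.0 (visible near the end of this section).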
00:20:44.317 [2024-07-15 13:19:40.885892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:45.250 13:19:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:45.250 13:19:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:45.250 13:19:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:45.250 13:19:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:45.250 13:19:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:45.250 13:19:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:45.250 13:19:41 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:20:45.250 13:19:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.250 13:19:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:45.250 [2024-07-15 13:19:41.728384] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:45.250 malloc0 00:20:45.250 [2024-07-15 13:19:41.759533] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:45.250 [2024-07-15 13:19:41.759763] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:45.250 13:19:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.250 13:19:41 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=100589 00:20:45.250 13:19:41 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:45.250 13:19:41 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 100589 /var/tmp/bdevperf.sock 00:20:45.250 13:19:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 100589 ']' 00:20:45.251 13:19:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:45.251 13:19:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:45.251 13:19:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:45.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:45.251 13:19:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:45.251 13:19:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:45.251 [2024-07-15 13:19:41.856959] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:20:45.251 [2024-07-15 13:19:41.857101] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100589 ] 00:20:45.578 [2024-07-15 13:19:41.996716] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:45.578 [2024-07-15 13:19:42.111946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:46.509 13:19:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:46.509 13:19:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:46.509 13:19:42 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.lTia0AerDv 00:20:46.766 13:19:43 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:46.766 [2024-07-15 13:19:43.497454] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:47.023 nvme0n1 00:20:47.023 13:19:43 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:47.023 Running I/O for 1 seconds... 00:20:48.394 00:20:48.394 Latency(us) 00:20:48.394 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:48.394 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:48.394 Verification LBA range: start 0x0 length 0x2000 00:20:48.394 nvme0n1 : 1.02 3906.74 15.26 0.00 0.00 32421.96 7000.44 38130.04 00:20:48.394 =================================================================================================================== 00:20:48.394 Total : 3906.74 15.26 0.00 0.00 32421.96 7000.44 38130.04 00:20:48.394 0 00:20:48.394 13:19:44 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:20:48.394 13:19:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.394 13:19:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:48.394 13:19:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.394 13:19:44 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:20:48.394 "subsystems": [ 00:20:48.394 { 00:20:48.394 "subsystem": "keyring", 00:20:48.394 "config": [ 00:20:48.394 { 00:20:48.394 "method": "keyring_file_add_key", 00:20:48.394 "params": { 00:20:48.394 "name": "key0", 00:20:48.394 "path": "/tmp/tmp.lTia0AerDv" 00:20:48.394 } 00:20:48.394 } 00:20:48.394 ] 00:20:48.394 }, 00:20:48.394 { 00:20:48.394 "subsystem": "iobuf", 00:20:48.394 "config": [ 00:20:48.394 { 00:20:48.394 "method": "iobuf_set_options", 00:20:48.394 "params": { 00:20:48.394 "large_bufsize": 135168, 00:20:48.394 "large_pool_count": 1024, 00:20:48.394 "small_bufsize": 8192, 00:20:48.394 "small_pool_count": 8192 00:20:48.394 } 00:20:48.394 } 00:20:48.394 ] 00:20:48.394 }, 00:20:48.394 { 00:20:48.394 "subsystem": "sock", 00:20:48.394 "config": [ 00:20:48.394 { 00:20:48.394 "method": "sock_set_default_impl", 00:20:48.394 "params": { 00:20:48.394 "impl_name": "posix" 00:20:48.394 } 00:20:48.394 }, 00:20:48.394 { 00:20:48.394 "method": "sock_impl_set_options", 00:20:48.394 "params": { 00:20:48.394 "enable_ktls": false, 
00:20:48.394 "enable_placement_id": 0, 00:20:48.394 "enable_quickack": false, 00:20:48.394 "enable_recv_pipe": true, 00:20:48.394 "enable_zerocopy_send_client": false, 00:20:48.394 "enable_zerocopy_send_server": true, 00:20:48.394 "impl_name": "ssl", 00:20:48.394 "recv_buf_size": 4096, 00:20:48.394 "send_buf_size": 4096, 00:20:48.394 "tls_version": 0, 00:20:48.394 "zerocopy_threshold": 0 00:20:48.394 } 00:20:48.394 }, 00:20:48.394 { 00:20:48.394 "method": "sock_impl_set_options", 00:20:48.394 "params": { 00:20:48.394 "enable_ktls": false, 00:20:48.394 "enable_placement_id": 0, 00:20:48.394 "enable_quickack": false, 00:20:48.394 "enable_recv_pipe": true, 00:20:48.394 "enable_zerocopy_send_client": false, 00:20:48.394 "enable_zerocopy_send_server": true, 00:20:48.394 "impl_name": "posix", 00:20:48.394 "recv_buf_size": 2097152, 00:20:48.394 "send_buf_size": 2097152, 00:20:48.394 "tls_version": 0, 00:20:48.394 "zerocopy_threshold": 0 00:20:48.394 } 00:20:48.394 } 00:20:48.394 ] 00:20:48.394 }, 00:20:48.394 { 00:20:48.394 "subsystem": "vmd", 00:20:48.394 "config": [] 00:20:48.394 }, 00:20:48.394 { 00:20:48.394 "subsystem": "accel", 00:20:48.394 "config": [ 00:20:48.394 { 00:20:48.394 "method": "accel_set_options", 00:20:48.394 "params": { 00:20:48.394 "buf_count": 2048, 00:20:48.394 "large_cache_size": 16, 00:20:48.394 "sequence_count": 2048, 00:20:48.394 "small_cache_size": 128, 00:20:48.394 "task_count": 2048 00:20:48.394 } 00:20:48.394 } 00:20:48.394 ] 00:20:48.394 }, 00:20:48.394 { 00:20:48.394 "subsystem": "bdev", 00:20:48.394 "config": [ 00:20:48.394 { 00:20:48.394 "method": "bdev_set_options", 00:20:48.394 "params": { 00:20:48.394 "bdev_auto_examine": true, 00:20:48.394 "bdev_io_cache_size": 256, 00:20:48.394 "bdev_io_pool_size": 65535, 00:20:48.394 "iobuf_large_cache_size": 16, 00:20:48.394 "iobuf_small_cache_size": 128 00:20:48.394 } 00:20:48.394 }, 00:20:48.394 { 00:20:48.394 "method": "bdev_raid_set_options", 00:20:48.394 "params": { 00:20:48.394 "process_window_size_kb": 1024 00:20:48.394 } 00:20:48.394 }, 00:20:48.394 { 00:20:48.394 "method": "bdev_iscsi_set_options", 00:20:48.394 "params": { 00:20:48.394 "timeout_sec": 30 00:20:48.394 } 00:20:48.394 }, 00:20:48.394 { 00:20:48.394 "method": "bdev_nvme_set_options", 00:20:48.394 "params": { 00:20:48.394 "action_on_timeout": "none", 00:20:48.394 "allow_accel_sequence": false, 00:20:48.394 "arbitration_burst": 0, 00:20:48.394 "bdev_retry_count": 3, 00:20:48.394 "ctrlr_loss_timeout_sec": 0, 00:20:48.394 "delay_cmd_submit": true, 00:20:48.394 "dhchap_dhgroups": [ 00:20:48.394 "null", 00:20:48.394 "ffdhe2048", 00:20:48.394 "ffdhe3072", 00:20:48.394 "ffdhe4096", 00:20:48.394 "ffdhe6144", 00:20:48.394 "ffdhe8192" 00:20:48.394 ], 00:20:48.394 "dhchap_digests": [ 00:20:48.394 "sha256", 00:20:48.394 "sha384", 00:20:48.394 "sha512" 00:20:48.394 ], 00:20:48.394 "disable_auto_failback": false, 00:20:48.394 "fast_io_fail_timeout_sec": 0, 00:20:48.394 "generate_uuids": false, 00:20:48.394 "high_priority_weight": 0, 00:20:48.394 "io_path_stat": false, 00:20:48.394 "io_queue_requests": 0, 00:20:48.394 "keep_alive_timeout_ms": 10000, 00:20:48.394 "low_priority_weight": 0, 00:20:48.394 "medium_priority_weight": 0, 00:20:48.394 "nvme_adminq_poll_period_us": 10000, 00:20:48.394 "nvme_error_stat": false, 00:20:48.394 "nvme_ioq_poll_period_us": 0, 00:20:48.394 "rdma_cm_event_timeout_ms": 0, 00:20:48.394 "rdma_max_cq_size": 0, 00:20:48.394 "rdma_srq_size": 0, 00:20:48.394 "reconnect_delay_sec": 0, 00:20:48.394 "timeout_admin_us": 0, 00:20:48.394 
"timeout_us": 0, 00:20:48.394 "transport_ack_timeout": 0, 00:20:48.394 "transport_retry_count": 4, 00:20:48.394 "transport_tos": 0 00:20:48.394 } 00:20:48.394 }, 00:20:48.394 { 00:20:48.395 "method": "bdev_nvme_set_hotplug", 00:20:48.395 "params": { 00:20:48.395 "enable": false, 00:20:48.395 "period_us": 100000 00:20:48.395 } 00:20:48.395 }, 00:20:48.395 { 00:20:48.395 "method": "bdev_malloc_create", 00:20:48.395 "params": { 00:20:48.395 "block_size": 4096, 00:20:48.395 "name": "malloc0", 00:20:48.395 "num_blocks": 8192, 00:20:48.395 "optimal_io_boundary": 0, 00:20:48.395 "physical_block_size": 4096, 00:20:48.395 "uuid": "13553fa1-e3e2-4d96-8222-64e3931ee956" 00:20:48.395 } 00:20:48.395 }, 00:20:48.395 { 00:20:48.395 "method": "bdev_wait_for_examine" 00:20:48.395 } 00:20:48.395 ] 00:20:48.395 }, 00:20:48.395 { 00:20:48.395 "subsystem": "nbd", 00:20:48.395 "config": [] 00:20:48.395 }, 00:20:48.395 { 00:20:48.395 "subsystem": "scheduler", 00:20:48.395 "config": [ 00:20:48.395 { 00:20:48.395 "method": "framework_set_scheduler", 00:20:48.395 "params": { 00:20:48.395 "name": "static" 00:20:48.395 } 00:20:48.395 } 00:20:48.395 ] 00:20:48.395 }, 00:20:48.395 { 00:20:48.395 "subsystem": "nvmf", 00:20:48.395 "config": [ 00:20:48.395 { 00:20:48.395 "method": "nvmf_set_config", 00:20:48.395 "params": { 00:20:48.395 "admin_cmd_passthru": { 00:20:48.395 "identify_ctrlr": false 00:20:48.395 }, 00:20:48.395 "discovery_filter": "match_any" 00:20:48.395 } 00:20:48.395 }, 00:20:48.395 { 00:20:48.395 "method": "nvmf_set_max_subsystems", 00:20:48.395 "params": { 00:20:48.395 "max_subsystems": 1024 00:20:48.395 } 00:20:48.395 }, 00:20:48.395 { 00:20:48.395 "method": "nvmf_set_crdt", 00:20:48.395 "params": { 00:20:48.395 "crdt1": 0, 00:20:48.395 "crdt2": 0, 00:20:48.395 "crdt3": 0 00:20:48.395 } 00:20:48.395 }, 00:20:48.395 { 00:20:48.395 "method": "nvmf_create_transport", 00:20:48.395 "params": { 00:20:48.395 "abort_timeout_sec": 1, 00:20:48.395 "ack_timeout": 0, 00:20:48.395 "buf_cache_size": 4294967295, 00:20:48.395 "c2h_success": false, 00:20:48.395 "data_wr_pool_size": 0, 00:20:48.395 "dif_insert_or_strip": false, 00:20:48.395 "in_capsule_data_size": 4096, 00:20:48.395 "io_unit_size": 131072, 00:20:48.395 "max_aq_depth": 128, 00:20:48.395 "max_io_qpairs_per_ctrlr": 127, 00:20:48.395 "max_io_size": 131072, 00:20:48.395 "max_queue_depth": 128, 00:20:48.395 "num_shared_buffers": 511, 00:20:48.395 "sock_priority": 0, 00:20:48.395 "trtype": "TCP", 00:20:48.395 "zcopy": false 00:20:48.395 } 00:20:48.395 }, 00:20:48.395 { 00:20:48.395 "method": "nvmf_create_subsystem", 00:20:48.395 "params": { 00:20:48.395 "allow_any_host": false, 00:20:48.395 "ana_reporting": false, 00:20:48.395 "max_cntlid": 65519, 00:20:48.395 "max_namespaces": 32, 00:20:48.395 "min_cntlid": 1, 00:20:48.395 "model_number": "SPDK bdev Controller", 00:20:48.395 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:48.395 "serial_number": "00000000000000000000" 00:20:48.395 } 00:20:48.395 }, 00:20:48.395 { 00:20:48.395 "method": "nvmf_subsystem_add_host", 00:20:48.395 "params": { 00:20:48.395 "host": "nqn.2016-06.io.spdk:host1", 00:20:48.395 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:48.395 "psk": "key0" 00:20:48.395 } 00:20:48.395 }, 00:20:48.395 { 00:20:48.395 "method": "nvmf_subsystem_add_ns", 00:20:48.395 "params": { 00:20:48.395 "namespace": { 00:20:48.395 "bdev_name": "malloc0", 00:20:48.395 "nguid": "13553FA1E3E24D96822264E3931EE956", 00:20:48.395 "no_auto_visible": false, 00:20:48.395 "nsid": 1, 00:20:48.395 "uuid": 
"13553fa1-e3e2-4d96-8222-64e3931ee956" 00:20:48.395 }, 00:20:48.395 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:20:48.395 } 00:20:48.395 }, 00:20:48.395 { 00:20:48.395 "method": "nvmf_subsystem_add_listener", 00:20:48.395 "params": { 00:20:48.395 "listen_address": { 00:20:48.395 "adrfam": "IPv4", 00:20:48.395 "traddr": "10.0.0.2", 00:20:48.395 "trsvcid": "4420", 00:20:48.395 "trtype": "TCP" 00:20:48.395 }, 00:20:48.395 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:48.395 "secure_channel": true 00:20:48.395 } 00:20:48.395 } 00:20:48.395 ] 00:20:48.395 } 00:20:48.395 ] 00:20:48.395 }' 00:20:48.395 13:19:44 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:48.653 13:19:45 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:20:48.653 "subsystems": [ 00:20:48.653 { 00:20:48.653 "subsystem": "keyring", 00:20:48.653 "config": [ 00:20:48.653 { 00:20:48.653 "method": "keyring_file_add_key", 00:20:48.653 "params": { 00:20:48.653 "name": "key0", 00:20:48.653 "path": "/tmp/tmp.lTia0AerDv" 00:20:48.653 } 00:20:48.653 } 00:20:48.653 ] 00:20:48.653 }, 00:20:48.653 { 00:20:48.653 "subsystem": "iobuf", 00:20:48.653 "config": [ 00:20:48.653 { 00:20:48.653 "method": "iobuf_set_options", 00:20:48.653 "params": { 00:20:48.653 "large_bufsize": 135168, 00:20:48.653 "large_pool_count": 1024, 00:20:48.653 "small_bufsize": 8192, 00:20:48.653 "small_pool_count": 8192 00:20:48.653 } 00:20:48.653 } 00:20:48.653 ] 00:20:48.653 }, 00:20:48.653 { 00:20:48.653 "subsystem": "sock", 00:20:48.653 "config": [ 00:20:48.653 { 00:20:48.653 "method": "sock_set_default_impl", 00:20:48.653 "params": { 00:20:48.653 "impl_name": "posix" 00:20:48.653 } 00:20:48.653 }, 00:20:48.653 { 00:20:48.653 "method": "sock_impl_set_options", 00:20:48.653 "params": { 00:20:48.653 "enable_ktls": false, 00:20:48.653 "enable_placement_id": 0, 00:20:48.653 "enable_quickack": false, 00:20:48.653 "enable_recv_pipe": true, 00:20:48.653 "enable_zerocopy_send_client": false, 00:20:48.653 "enable_zerocopy_send_server": true, 00:20:48.653 "impl_name": "ssl", 00:20:48.653 "recv_buf_size": 4096, 00:20:48.653 "send_buf_size": 4096, 00:20:48.653 "tls_version": 0, 00:20:48.653 "zerocopy_threshold": 0 00:20:48.653 } 00:20:48.653 }, 00:20:48.653 { 00:20:48.653 "method": "sock_impl_set_options", 00:20:48.653 "params": { 00:20:48.653 "enable_ktls": false, 00:20:48.653 "enable_placement_id": 0, 00:20:48.653 "enable_quickack": false, 00:20:48.653 "enable_recv_pipe": true, 00:20:48.653 "enable_zerocopy_send_client": false, 00:20:48.653 "enable_zerocopy_send_server": true, 00:20:48.653 "impl_name": "posix", 00:20:48.653 "recv_buf_size": 2097152, 00:20:48.653 "send_buf_size": 2097152, 00:20:48.653 "tls_version": 0, 00:20:48.653 "zerocopy_threshold": 0 00:20:48.653 } 00:20:48.653 } 00:20:48.653 ] 00:20:48.653 }, 00:20:48.653 { 00:20:48.653 "subsystem": "vmd", 00:20:48.653 "config": [] 00:20:48.653 }, 00:20:48.653 { 00:20:48.653 "subsystem": "accel", 00:20:48.653 "config": [ 00:20:48.653 { 00:20:48.653 "method": "accel_set_options", 00:20:48.653 "params": { 00:20:48.653 "buf_count": 2048, 00:20:48.653 "large_cache_size": 16, 00:20:48.653 "sequence_count": 2048, 00:20:48.653 "small_cache_size": 128, 00:20:48.653 "task_count": 2048 00:20:48.653 } 00:20:48.653 } 00:20:48.653 ] 00:20:48.653 }, 00:20:48.653 { 00:20:48.653 "subsystem": "bdev", 00:20:48.653 "config": [ 00:20:48.653 { 00:20:48.653 "method": "bdev_set_options", 00:20:48.653 "params": { 00:20:48.653 "bdev_auto_examine": true, 
00:20:48.653 "bdev_io_cache_size": 256, 00:20:48.653 "bdev_io_pool_size": 65535, 00:20:48.653 "iobuf_large_cache_size": 16, 00:20:48.653 "iobuf_small_cache_size": 128 00:20:48.653 } 00:20:48.653 }, 00:20:48.653 { 00:20:48.653 "method": "bdev_raid_set_options", 00:20:48.653 "params": { 00:20:48.653 "process_window_size_kb": 1024 00:20:48.653 } 00:20:48.653 }, 00:20:48.653 { 00:20:48.653 "method": "bdev_iscsi_set_options", 00:20:48.653 "params": { 00:20:48.653 "timeout_sec": 30 00:20:48.653 } 00:20:48.653 }, 00:20:48.653 { 00:20:48.653 "method": "bdev_nvme_set_options", 00:20:48.653 "params": { 00:20:48.653 "action_on_timeout": "none", 00:20:48.653 "allow_accel_sequence": false, 00:20:48.653 "arbitration_burst": 0, 00:20:48.653 "bdev_retry_count": 3, 00:20:48.653 "ctrlr_loss_timeout_sec": 0, 00:20:48.653 "delay_cmd_submit": true, 00:20:48.653 "dhchap_dhgroups": [ 00:20:48.653 "null", 00:20:48.653 "ffdhe2048", 00:20:48.653 "ffdhe3072", 00:20:48.653 "ffdhe4096", 00:20:48.653 "ffdhe6144", 00:20:48.653 "ffdhe8192" 00:20:48.653 ], 00:20:48.653 "dhchap_digests": [ 00:20:48.653 "sha256", 00:20:48.653 "sha384", 00:20:48.653 "sha512" 00:20:48.653 ], 00:20:48.653 "disable_auto_failback": false, 00:20:48.653 "fast_io_fail_timeout_sec": 0, 00:20:48.653 "generate_uuids": false, 00:20:48.653 "high_priority_weight": 0, 00:20:48.653 "io_path_stat": false, 00:20:48.653 "io_queue_requests": 512, 00:20:48.653 "keep_alive_timeout_ms": 10000, 00:20:48.653 "low_priority_weight": 0, 00:20:48.653 "medium_priority_weight": 0, 00:20:48.653 "nvme_adminq_poll_period_us": 10000, 00:20:48.653 "nvme_error_stat": false, 00:20:48.653 "nvme_ioq_poll_period_us": 0, 00:20:48.653 "rdma_cm_event_timeout_ms": 0, 00:20:48.653 "rdma_max_cq_size": 0, 00:20:48.653 "rdma_srq_size": 0, 00:20:48.653 "reconnect_delay_sec": 0, 00:20:48.653 "timeout_admin_us": 0, 00:20:48.653 "timeout_us": 0, 00:20:48.653 "transport_ack_timeout": 0, 00:20:48.653 "transport_retry_count": 4, 00:20:48.653 "transport_tos": 0 00:20:48.653 } 00:20:48.653 }, 00:20:48.653 { 00:20:48.653 "method": "bdev_nvme_attach_controller", 00:20:48.653 "params": { 00:20:48.653 "adrfam": "IPv4", 00:20:48.653 "ctrlr_loss_timeout_sec": 0, 00:20:48.653 "ddgst": false, 00:20:48.653 "fast_io_fail_timeout_sec": 0, 00:20:48.653 "hdgst": false, 00:20:48.653 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:48.653 "name": "nvme0", 00:20:48.653 "prchk_guard": false, 00:20:48.653 "prchk_reftag": false, 00:20:48.653 "psk": "key0", 00:20:48.653 "reconnect_delay_sec": 0, 00:20:48.653 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:48.653 "traddr": "10.0.0.2", 00:20:48.653 "trsvcid": "4420", 00:20:48.653 "trtype": "TCP" 00:20:48.653 } 00:20:48.653 }, 00:20:48.653 { 00:20:48.653 "method": "bdev_nvme_set_hotplug", 00:20:48.653 "params": { 00:20:48.653 "enable": false, 00:20:48.653 "period_us": 100000 00:20:48.653 } 00:20:48.653 }, 00:20:48.653 { 00:20:48.653 "method": "bdev_enable_histogram", 00:20:48.653 "params": { 00:20:48.653 "enable": true, 00:20:48.653 "name": "nvme0n1" 00:20:48.653 } 00:20:48.653 }, 00:20:48.653 { 00:20:48.653 "method": "bdev_wait_for_examine" 00:20:48.653 } 00:20:48.653 ] 00:20:48.653 }, 00:20:48.653 { 00:20:48.653 "subsystem": "nbd", 00:20:48.653 "config": [] 00:20:48.653 } 00:20:48.653 ] 00:20:48.653 }' 00:20:48.653 13:19:45 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 100589 00:20:48.653 13:19:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 100589 ']' 00:20:48.653 13:19:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 100589 
00:20:48.653 13:19:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:48.653 13:19:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:48.654 13:19:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 100589 00:20:48.654 13:19:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:48.654 killing process with pid 100589 00:20:48.654 Received shutdown signal, test time was about 1.000000 seconds 00:20:48.654 00:20:48.654 Latency(us) 00:20:48.654 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:48.654 =================================================================================================================== 00:20:48.654 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:48.654 13:19:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:48.654 13:19:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 100589' 00:20:48.654 13:19:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 100589 00:20:48.654 13:19:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 100589 00:20:48.911 13:19:45 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 100539 00:20:48.911 13:19:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 100539 ']' 00:20:48.911 13:19:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 100539 00:20:48.911 13:19:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:48.911 13:19:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:48.911 13:19:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 100539 00:20:48.911 13:19:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:48.911 killing process with pid 100539 00:20:48.911 13:19:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:48.911 13:19:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 100539' 00:20:48.911 13:19:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 100539 00:20:48.911 13:19:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 100539 00:20:49.168 13:19:45 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:20:49.168 13:19:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:49.168 13:19:45 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:20:49.168 "subsystems": [ 00:20:49.168 { 00:20:49.168 "subsystem": "keyring", 00:20:49.168 "config": [ 00:20:49.168 { 00:20:49.168 "method": "keyring_file_add_key", 00:20:49.168 "params": { 00:20:49.168 "name": "key0", 00:20:49.168 "path": "/tmp/tmp.lTia0AerDv" 00:20:49.168 } 00:20:49.168 } 00:20:49.168 ] 00:20:49.168 }, 00:20:49.168 { 00:20:49.168 "subsystem": "iobuf", 00:20:49.168 "config": [ 00:20:49.168 { 00:20:49.168 "method": "iobuf_set_options", 00:20:49.168 "params": { 00:20:49.168 "large_bufsize": 135168, 00:20:49.168 "large_pool_count": 1024, 00:20:49.168 "small_bufsize": 8192, 00:20:49.168 "small_pool_count": 8192 00:20:49.168 } 00:20:49.168 } 00:20:49.168 ] 00:20:49.168 }, 00:20:49.168 { 00:20:49.168 "subsystem": "sock", 00:20:49.168 "config": [ 00:20:49.168 { 00:20:49.168 "method": "sock_set_default_impl", 00:20:49.168 "params": { 00:20:49.168 "impl_name": "posix" 00:20:49.168 } 00:20:49.168 }, 00:20:49.168 { 00:20:49.168 
"method": "sock_impl_set_options", 00:20:49.168 "params": { 00:20:49.168 "enable_ktls": false, 00:20:49.168 "enable_placement_id": 0, 00:20:49.168 "enable_quickack": false, 00:20:49.168 "enable_recv_pipe": true, 00:20:49.168 "enable_zerocopy_send_client": false, 00:20:49.168 "enable_zerocopy_send_server": true, 00:20:49.168 "impl_name": "ssl", 00:20:49.168 "recv_buf_size": 4096, 00:20:49.168 "send_buf_size": 4096, 00:20:49.168 "tls_version": 0, 00:20:49.168 "zerocopy_threshold": 0 00:20:49.168 } 00:20:49.168 }, 00:20:49.168 { 00:20:49.168 "method": "sock_impl_set_options", 00:20:49.168 "params": { 00:20:49.168 "enable_ktls": false, 00:20:49.168 "enable_placement_id": 0, 00:20:49.168 "enable_quickack": false, 00:20:49.168 "enable_recv_pipe": true, 00:20:49.168 "enable_zerocopy_send_client": false, 00:20:49.168 "enable_zerocopy_send_server": true, 00:20:49.168 "impl_name": "posix", 00:20:49.168 "recv_buf_size": 2097152, 00:20:49.168 "send_buf_size": 2097152, 00:20:49.168 "tls_version": 0, 00:20:49.168 "zerocopy_threshold": 0 00:20:49.168 } 00:20:49.168 } 00:20:49.168 ] 00:20:49.168 }, 00:20:49.168 { 00:20:49.168 "subsystem": "vmd", 00:20:49.168 "config": [] 00:20:49.168 }, 00:20:49.168 { 00:20:49.168 "subsystem": "accel", 00:20:49.168 "config": [ 00:20:49.168 { 00:20:49.168 "method": "accel_set_options", 00:20:49.168 "params": { 00:20:49.168 "buf_count": 2048, 00:20:49.168 "large_cache_size": 16, 00:20:49.168 "sequence_count": 2048, 00:20:49.168 "small_cache_size": 128, 00:20:49.168 "task_count": 2048 00:20:49.168 } 00:20:49.168 } 00:20:49.168 ] 00:20:49.168 }, 00:20:49.168 { 00:20:49.168 "subsystem": "bdev", 00:20:49.168 "config": [ 00:20:49.168 { 00:20:49.168 "method": "bdev_set_options", 00:20:49.168 "params": { 00:20:49.168 "bdev_auto_examine": true, 00:20:49.168 "bdev_io_cache_size": 256, 00:20:49.168 "bdev_io_pool_size": 65535, 00:20:49.168 "iobuf_large_cache_size": 16, 00:20:49.168 "iobuf_small_cache_size": 128 00:20:49.168 } 00:20:49.168 }, 00:20:49.168 { 00:20:49.168 "method": "bdev_raid_set_options", 00:20:49.168 "params": { 00:20:49.168 "process_window_size_kb": 1024 00:20:49.168 } 00:20:49.168 }, 00:20:49.168 { 00:20:49.168 "method": "bdev_iscsi_set_options", 00:20:49.168 "params": { 00:20:49.168 "timeout_sec": 30 00:20:49.168 } 00:20:49.168 }, 00:20:49.168 { 00:20:49.168 "method": "bdev_nvme_set_options", 00:20:49.168 "params": { 00:20:49.168 "action_on_timeout": "none", 00:20:49.168 "allow_accel_sequence": false, 00:20:49.168 "arbitration_burst": 0, 00:20:49.168 "bdev_retry_count": 3, 00:20:49.168 "ctrlr_loss_timeout_sec": 0, 00:20:49.168 "delay_cmd_submit": true, 00:20:49.168 "dhchap_dhgroups": [ 00:20:49.168 "null", 00:20:49.168 "ffdhe2048", 00:20:49.168 "ffdhe3072", 00:20:49.168 "ffdhe4096", 00:20:49.168 "ffdhe6144", 00:20:49.168 "ffdhe8192" 00:20:49.168 ], 00:20:49.168 "dhchap_digests": [ 00:20:49.168 "sha256", 00:20:49.168 "sha384", 00:20:49.168 "sha512" 00:20:49.168 ], 00:20:49.168 "disable_auto_failback": false, 00:20:49.168 "fast_io_fail_timeout_sec": 0, 00:20:49.168 "generate_uuids": false, 00:20:49.168 "high_priority_weight": 0, 00:20:49.168 "io_path_stat": false, 00:20:49.168 "io_queue_requests": 0, 00:20:49.168 "keep_alive_timeout_ms": 10000, 00:20:49.168 "low_priority_weight": 0, 00:20:49.168 "medium_priority_weight": 0, 00:20:49.168 "nvme_adminq_poll_period_us": 10000, 00:20:49.168 "nvme_error_stat": false, 00:20:49.168 "nvme_ioq_poll_period_us": 0, 00:20:49.168 "rdma_cm_event_timeout_ms": 0, 00:20:49.168 "rdma_max_cq_size": 0, 00:20:49.168 "rdma_srq_size": 0, 
00:20:49.168 "reconnect_delay_sec": 0, 00:20:49.168 "timeout_admin_us": 0, 00:20:49.168 "timeout_us": 0, 00:20:49.168 "transport_ack_timeout": 0, 00:20:49.168 "transport_retry_count": 4, 00:20:49.168 "transport_tos": 0 00:20:49.168 } 00:20:49.168 }, 00:20:49.168 { 00:20:49.168 "method": "bdev_nvme_set_hotplug", 00:20:49.168 "params": { 00:20:49.168 "enable": false, 00:20:49.168 "period_us": 100000 00:20:49.168 } 00:20:49.168 }, 00:20:49.168 { 00:20:49.168 "method": "bdev_malloc_create", 00:20:49.168 "params": { 00:20:49.168 "block_size": 4096, 00:20:49.168 "name": "malloc0", 00:20:49.169 "num_blocks": 8192, 00:20:49.169 "optimal_io_boundary": 0, 00:20:49.169 "physical_block_size": 4096, 00:20:49.169 "uuid": "13553fa1-e3e2-4d96-8222-64e3931ee956" 00:20:49.169 } 00:20:49.169 }, 00:20:49.169 { 00:20:49.169 "method": "bdev_wait_for_examine" 00:20:49.169 } 00:20:49.169 ] 00:20:49.169 }, 00:20:49.169 { 00:20:49.169 "subsystem": "nbd", 00:20:49.169 "config": [] 00:20:49.169 }, 00:20:49.169 { 00:20:49.169 "subsystem": "scheduler", 00:20:49.169 "config": [ 00:20:49.169 { 00:20:49.169 "method": "framework_set_scheduler", 00:20:49.169 "params": { 00:20:49.169 "name": "static" 00:20:49.169 } 00:20:49.169 } 00:20:49.169 ] 00:20:49.169 }, 00:20:49.169 { 00:20:49.169 "subsystem": "nvmf", 00:20:49.169 "config": [ 00:20:49.169 { 00:20:49.169 "method": "nvmf_set_config", 00:20:49.169 "params": { 00:20:49.169 "admin_cmd_passthru": { 00:20:49.169 "identify_ctrlr": false 00:20:49.169 }, 00:20:49.169 "discovery_filter": "match_any" 00:20:49.169 } 00:20:49.169 }, 00:20:49.169 { 00:20:49.169 "method": "nvmf_set_max_subsystems", 00:20:49.169 "params": { 00:20:49.169 "max_subsystems": 1024 00:20:49.169 } 00:20:49.169 }, 00:20:49.169 { 00:20:49.169 "method": "nvmf_set_crdt", 00:20:49.169 "params": { 00:20:49.169 "crdt1": 0, 00:20:49.169 "crdt2": 0, 00:20:49.169 "crdt3": 0 00:20:49.169 } 00:20:49.169 }, 00:20:49.169 { 00:20:49.169 "method": "nvmf_create_transport", 00:20:49.169 "params": { 00:20:49.169 "abort_timeout_sec": 1, 00:20:49.169 "ack_timeout": 0, 00:20:49.169 "buf_cache_size": 4294967295, 00:20:49.169 "c2h_success": false, 00:20:49.169 "data_wr_pool_size": 0, 00:20:49.169 "dif_insert_or_strip": false, 00:20:49.169 "in_capsule_data_size": 4096, 00:20:49.169 "io_unit_size": 131072, 00:20:49.169 "max_aq_depth": 128, 00:20:49.169 "max_io_qpairs_per_ctrlr": 127, 00:20:49.169 "max_io_size": 131072, 00:20:49.169 "max_queue_depth": 128, 00:20:49.169 "num_shared_buffers": 511, 00:20:49.169 "sock_priority": 0, 00:20:49.169 "trtype": "TCP", 00:20:49.169 "zcopy": false 00:20:49.169 } 00:20:49.169 }, 00:20:49.169 { 00:20:49.169 "method": "nvmf_create_subsystem", 00:20:49.169 "params": { 00:20:49.169 "allow_any_host": false, 00:20:49.169 "ana_reporting": false, 00:20:49.169 "max_cntlid": 65519, 00:20:49.169 "max_namespaces": 32, 00:20:49.169 "min_cntlid": 1, 00:20:49.169 "model_number": "SPDK bdev Controller", 00:20:49.169 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:49.169 "serial_number": "00000000000000000000" 00:20:49.169 } 00:20:49.169 }, 00:20:49.169 { 00:20:49.169 "method": "nvmf_subsystem_add_host", 00:20:49.169 "params": { 00:20:49.169 "host": "nqn.2016-06.io.spdk:host1", 00:20:49.169 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:49.169 "psk": "key0" 00:20:49.169 } 00:20:49.169 }, 00:20:49.169 { 00:20:49.169 "method": "nvmf_subsystem_add_ns", 00:20:49.169 "params": { 00:20:49.169 "namespace": { 00:20:49.169 "bdev_name": "malloc0", 00:20:49.169 "nguid": "13553FA1E3E24D96822264E3931EE956", 00:20:49.169 
"no_auto_visible": false, 00:20:49.169 "nsid": 1, 00:20:49.169 "uuid": "13553fa1-e3e2-4d96-8222-64e3931ee956" 00:20:49.169 }, 00:20:49.169 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:20:49.169 } 00:20:49.169 }, 00:20:49.169 { 00:20:49.169 "method": "nvmf_subsystem_add_listener", 00:20:49.169 "params": { 00:20:49.169 "listen_address": { 00:20:49.169 "adrfam": "IPv4", 00:20:49.169 "traddr": "10.0.0.2", 00:20:49.169 "trsvcid": "4420", 00:20:49.169 "trtype": "TCP" 00:20:49.169 }, 00:20:49.169 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:49.169 "secure_channel": true 00:20:49.169 } 00:20:49.169 } 00:20:49.169 ] 00:20:49.169 } 00:20:49.169 ] 00:20:49.169 }' 00:20:49.169 13:19:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:49.169 13:19:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:49.169 13:19:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=100684 00:20:49.169 13:19:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:20:49.169 13:19:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 100684 00:20:49.169 13:19:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 100684 ']' 00:20:49.169 13:19:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:49.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:49.169 13:19:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:49.169 13:19:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:49.169 13:19:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:49.169 13:19:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:49.169 [2024-07-15 13:19:45.751407] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:20:49.169 [2024-07-15 13:19:45.751502] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:49.169 [2024-07-15 13:19:45.890877] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:49.426 [2024-07-15 13:19:46.000880] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:49.426 [2024-07-15 13:19:46.000960] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:49.426 [2024-07-15 13:19:46.000975] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:49.426 [2024-07-15 13:19:46.000985] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:49.426 [2024-07-15 13:19:46.000994] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:49.426 [2024-07-15 13:19:46.001115] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:49.684 [2024-07-15 13:19:46.246957] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:49.684 [2024-07-15 13:19:46.278860] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:49.684 [2024-07-15 13:19:46.279124] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:50.248 13:19:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:50.248 13:19:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:50.248 13:19:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:50.248 13:19:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:50.248 13:19:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:50.248 13:19:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:50.248 13:19:46 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=100724 00:20:50.248 13:19:46 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 100724 /var/tmp/bdevperf.sock 00:20:50.248 13:19:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 100724 ']' 00:20:50.248 13:19:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:50.248 13:19:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:50.248 13:19:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:50.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:50.248 13:19:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:50.248 13:19:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:50.248 13:19:46 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:20:50.248 13:19:46 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:20:50.248 "subsystems": [ 00:20:50.248 { 00:20:50.248 "subsystem": "keyring", 00:20:50.248 "config": [ 00:20:50.248 { 00:20:50.248 "method": "keyring_file_add_key", 00:20:50.248 "params": { 00:20:50.248 "name": "key0", 00:20:50.248 "path": "/tmp/tmp.lTia0AerDv" 00:20:50.248 } 00:20:50.248 } 00:20:50.248 ] 00:20:50.248 }, 00:20:50.248 { 00:20:50.248 "subsystem": "iobuf", 00:20:50.248 "config": [ 00:20:50.248 { 00:20:50.248 "method": "iobuf_set_options", 00:20:50.248 "params": { 00:20:50.248 "large_bufsize": 135168, 00:20:50.248 "large_pool_count": 1024, 00:20:50.248 "small_bufsize": 8192, 00:20:50.248 "small_pool_count": 8192 00:20:50.248 } 00:20:50.248 } 00:20:50.248 ] 00:20:50.248 }, 00:20:50.248 { 00:20:50.248 "subsystem": "sock", 00:20:50.248 "config": [ 00:20:50.248 { 00:20:50.248 "method": "sock_set_default_impl", 00:20:50.248 "params": { 00:20:50.248 "impl_name": "posix" 00:20:50.248 } 00:20:50.248 }, 00:20:50.248 { 00:20:50.248 "method": "sock_impl_set_options", 00:20:50.248 "params": { 00:20:50.248 "enable_ktls": false, 00:20:50.248 "enable_placement_id": 0, 00:20:50.248 "enable_quickack": false, 00:20:50.248 "enable_recv_pipe": true, 00:20:50.248 "enable_zerocopy_send_client": false, 00:20:50.248 "enable_zerocopy_send_server": true, 00:20:50.248 "impl_name": "ssl", 00:20:50.248 "recv_buf_size": 4096, 00:20:50.248 "send_buf_size": 4096, 00:20:50.248 "tls_version": 0, 00:20:50.248 "zerocopy_threshold": 0 00:20:50.248 } 00:20:50.248 }, 00:20:50.248 { 00:20:50.248 "method": "sock_impl_set_options", 00:20:50.248 "params": { 00:20:50.248 "enable_ktls": false, 00:20:50.248 "enable_placement_id": 0, 00:20:50.248 "enable_quickack": false, 00:20:50.248 "enable_recv_pipe": true, 00:20:50.248 "enable_zerocopy_send_client": false, 00:20:50.248 "enable_zerocopy_send_server": true, 00:20:50.248 "impl_name": "posix", 00:20:50.248 "recv_buf_size": 2097152, 00:20:50.248 "send_buf_size": 2097152, 00:20:50.248 "tls_version": 0, 00:20:50.248 "zerocopy_threshold": 0 00:20:50.248 } 00:20:50.248 } 00:20:50.248 ] 00:20:50.248 }, 00:20:50.248 { 00:20:50.248 "subsystem": "vmd", 00:20:50.248 "config": [] 00:20:50.248 }, 00:20:50.248 { 00:20:50.248 "subsystem": "accel", 00:20:50.248 "config": [ 00:20:50.248 { 00:20:50.248 "method": "accel_set_options", 00:20:50.248 "params": { 00:20:50.248 "buf_count": 2048, 00:20:50.248 "large_cache_size": 16, 00:20:50.248 "sequence_count": 2048, 00:20:50.248 "small_cache_size": 128, 00:20:50.248 "task_count": 2048 00:20:50.248 } 00:20:50.248 } 00:20:50.248 ] 00:20:50.248 }, 00:20:50.248 { 00:20:50.248 "subsystem": "bdev", 00:20:50.248 "config": [ 00:20:50.248 { 00:20:50.248 "method": "bdev_set_options", 00:20:50.248 "params": { 00:20:50.248 "bdev_auto_examine": true, 00:20:50.248 "bdev_io_cache_size": 256, 00:20:50.248 "bdev_io_pool_size": 65535, 00:20:50.248 "iobuf_large_cache_size": 16, 00:20:50.248 "iobuf_small_cache_size": 128 00:20:50.248 } 00:20:50.248 }, 00:20:50.248 { 00:20:50.248 "method": "bdev_raid_set_options", 00:20:50.248 "params": { 00:20:50.249 "process_window_size_kb": 1024 00:20:50.249 } 00:20:50.249 }, 00:20:50.249 
{ 00:20:50.249 "method": "bdev_iscsi_set_options", 00:20:50.249 "params": { 00:20:50.249 "timeout_sec": 30 00:20:50.249 } 00:20:50.249 }, 00:20:50.249 { 00:20:50.249 "method": "bdev_nvme_set_options", 00:20:50.249 "params": { 00:20:50.249 "action_on_timeout": "none", 00:20:50.249 "allow_accel_sequence": false, 00:20:50.249 "arbitration_burst": 0, 00:20:50.249 "bdev_retry_count": 3, 00:20:50.249 "ctrlr_loss_timeout_sec": 0, 00:20:50.249 "delay_cmd_submit": true, 00:20:50.249 "dhchap_dhgroups": [ 00:20:50.249 "null", 00:20:50.249 "ffdhe2048", 00:20:50.249 "ffdhe3072", 00:20:50.249 "ffdhe4096", 00:20:50.249 "ffdhe6144", 00:20:50.249 "ffdhe8192" 00:20:50.249 ], 00:20:50.249 "dhchap_digests": [ 00:20:50.249 "sha256", 00:20:50.249 "sha384", 00:20:50.249 "sha512" 00:20:50.249 ], 00:20:50.249 "disable_auto_failback": false, 00:20:50.249 "fast_io_fail_timeout_sec": 0, 00:20:50.249 "generate_uuids": false, 00:20:50.249 "high_priority_weight": 0, 00:20:50.249 "io_path_stat": false, 00:20:50.249 "io_queue_requests": 512, 00:20:50.249 "keep_alive_timeout_ms": 10000, 00:20:50.249 "low_priority_weight": 0, 00:20:50.249 "medium_priority_weight": 0, 00:20:50.249 "nvme_adminq_poll_period_us": 10000, 00:20:50.249 "nvme_error_stat": false, 00:20:50.249 "nvme_ioq_poll_period_us": 0, 00:20:50.249 "rdma_cm_event_timeout_ms": 0, 00:20:50.249 "rdma_max_cq_size": 0, 00:20:50.249 "rdma_srq_size": 0, 00:20:50.249 "reconnect_delay_sec": 0, 00:20:50.249 "timeout_admin_us": 0, 00:20:50.249 "timeout_us": 0, 00:20:50.249 "transport_ack_timeout": 0, 00:20:50.249 "transport_retry_count": 4, 00:20:50.249 "transport_tos": 0 00:20:50.249 } 00:20:50.249 }, 00:20:50.249 { 00:20:50.249 "method": "bdev_nvme_attach_controller", 00:20:50.249 "params": { 00:20:50.249 "adrfam": "IPv4", 00:20:50.249 "ctrlr_loss_timeout_sec": 0, 00:20:50.249 "ddgst": false, 00:20:50.249 "fast_io_fail_timeout_sec": 0, 00:20:50.249 "hdgst": false, 00:20:50.249 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:50.249 "name": "nvme0", 00:20:50.249 "prchk_guard": false, 00:20:50.249 "prchk_reftag": false, 00:20:50.249 "psk": "key0", 00:20:50.249 "reconnect_delay_sec": 0, 00:20:50.249 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:50.249 "traddr": "10.0.0.2", 00:20:50.249 "trsvcid": "4420", 00:20:50.249 "trtype": "TCP" 00:20:50.249 } 00:20:50.249 }, 00:20:50.249 { 00:20:50.249 "method": "bdev_nvme_set_hotplug", 00:20:50.249 "params": { 00:20:50.249 "enable": false, 00:20:50.249 "period_us": 100000 00:20:50.249 } 00:20:50.249 }, 00:20:50.249 { 00:20:50.249 "method": "bdev_enable_histogram", 00:20:50.249 "params": { 00:20:50.249 "enable": true, 00:20:50.249 "name": "nvme0n1" 00:20:50.249 } 00:20:50.249 }, 00:20:50.249 { 00:20:50.249 "method": "bdev_wait_for_examine" 00:20:50.249 } 00:20:50.249 ] 00:20:50.249 }, 00:20:50.249 { 00:20:50.249 "subsystem": "nbd", 00:20:50.249 "config": [] 00:20:50.249 } 00:20:50.249 ] 00:20:50.249 }' 00:20:50.249 [2024-07-15 13:19:46.869134] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:20:50.249 [2024-07-15 13:19:46.869262] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100724 ] 00:20:50.509 [2024-07-15 13:19:47.007080] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:50.509 [2024-07-15 13:19:47.113517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:50.772 [2024-07-15 13:19:47.287253] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:51.336 13:19:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:51.336 13:19:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:51.336 13:19:47 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:51.336 13:19:47 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:20:51.593 13:19:48 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.593 13:19:48 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:51.850 Running I/O for 1 seconds... 00:20:52.782 00:20:52.782 Latency(us) 00:20:52.782 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:52.782 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:52.782 Verification LBA range: start 0x0 length 0x2000 00:20:52.782 nvme0n1 : 1.01 3353.02 13.10 0.00 0.00 37865.76 5689.72 36938.47 00:20:52.782 =================================================================================================================== 00:20:52.782 Total : 3353.02 13.10 0.00 0.00 37865.76 5689.72 36938.47 00:20:52.782 0 00:20:52.782 13:19:49 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:20:52.782 13:19:49 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:20:52.782 13:19:49 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:20:52.782 13:19:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@804 -- # type=--id 00:20:52.782 13:19:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@805 -- # id=0 00:20:52.783 13:19:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:20:52.783 13:19:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:52.783 13:19:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:20:52.783 13:19:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:20:52.783 13:19:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@816 -- # for n in $shm_files 00:20:52.783 13:19:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:52.783 nvmf_trace.0 00:20:53.040 13:19:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # return 0 00:20:53.040 13:19:49 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 100724 00:20:53.040 13:19:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 100724 ']' 00:20:53.040 13:19:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 100724 00:20:53.040 13:19:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:53.040 13:19:49 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:53.040 13:19:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 100724 00:20:53.040 13:19:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:53.040 13:19:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:53.040 killing process with pid 100724 00:20:53.040 13:19:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 100724' 00:20:53.040 Received shutdown signal, test time was about 1.000000 seconds 00:20:53.040 00:20:53.040 Latency(us) 00:20:53.040 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:53.040 =================================================================================================================== 00:20:53.040 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:53.040 13:19:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 100724 00:20:53.040 13:19:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 100724 00:20:53.298 13:19:49 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:20:53.298 13:19:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:53.298 13:19:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:20:53.298 13:19:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:53.298 13:19:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:20:53.298 13:19:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:53.298 13:19:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:53.298 rmmod nvme_tcp 00:20:53.298 rmmod nvme_fabrics 00:20:53.298 rmmod nvme_keyring 00:20:53.298 13:19:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:53.298 13:19:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:20:53.298 13:19:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:20:53.298 13:19:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 100684 ']' 00:20:53.298 13:19:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 100684 00:20:53.298 13:19:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 100684 ']' 00:20:53.298 13:19:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 100684 00:20:53.298 13:19:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:53.298 13:19:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:53.298 13:19:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 100684 00:20:53.298 13:19:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:53.298 killing process with pid 100684 00:20:53.298 13:19:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:53.298 13:19:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 100684' 00:20:53.298 13:19:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 100684 00:20:53.298 13:19:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 100684 00:20:53.556 13:19:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:53.556 13:19:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:53.556 13:19:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:53.556 13:19:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ 
nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:53.556 13:19:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:53.556 13:19:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:53.556 13:19:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:53.556 13:19:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:53.556 13:19:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:53.556 13:19:50 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.9camjkBE6O /tmp/tmp.l9HF9KAl3d /tmp/tmp.lTia0AerDv 00:20:53.556 00:20:53.556 real 1m29.562s 00:20:53.556 user 2m24.610s 00:20:53.556 sys 0m28.736s 00:20:53.556 13:19:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:53.556 13:19:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:53.556 ************************************ 00:20:53.556 END TEST nvmf_tls 00:20:53.556 ************************************ 00:20:53.556 13:19:50 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:53.556 13:19:50 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:20:53.556 13:19:50 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:53.556 13:19:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:53.556 ************************************ 00:20:53.556 START TEST nvmf_fips 00:20:53.556 ************************************ 00:20:53.556 13:19:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:53.815 * Looking for test storage... 
00:20:53.815 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=c8b8b44b-387e-43b9-a950-dc0d98528a02 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- 
scripts/common.sh@341 -- # case "$op" in 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:20:53.815 13:19:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:53.816 13:19:50 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:20:53.816 13:19:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:20:53.816 13:19:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:20:53.816 13:19:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:20:53.816 13:19:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:53.816 13:19:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:53.816 13:19:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:20:53.816 13:19:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:20:53.816 13:19:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:20:53.816 13:19:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:20:53.816 13:19:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:20:53.816 13:19:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:20:53.816 13:19:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:53.816 13:19:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:20:53.816 13:19:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:20:53.816 13:19:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:20:53.816 13:19:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:20:53.816 13:19:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:20:53.816 13:19:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:20:53.816 13:19:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:53.816 13:19:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:20:53.816 13:19:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:20:53.816 13:19:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:53.816 13:19:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:20:53.816 13:19:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:20:53.816 13:19:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:53.816 13:19:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:20:53.816 13:19:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:53.816 13:19:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:20:53.816 13:19:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:53.816 13:19:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:20:53.816 13:19:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:20:53.816 13:19:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:20:53.816 Error setting digest 00:20:53.816 00729BB5A47F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:20:53.816 00729BB5A47F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:20:53.816 13:19:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:20:53.816 13:19:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:53.816 13:19:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:53.816 13:19:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:53.816 13:19:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:20:53.816 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:53.816 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:53.816 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:53.816 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:53.816 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:53.816 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:53.816 13:19:50 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:53.816 13:19:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:53.816 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:53.816 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:53.816 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:53.816 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:53.816 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:53.816 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:53.816 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:53.816 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:53.816 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:53.816 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:53.816 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:53.816 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:53.816 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:53.816 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:53.816 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:53.816 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:53.816 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:53.816 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:53.816 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:53.816 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:53.816 Cannot find device "nvmf_tgt_br" 00:20:53.816 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # true 00:20:53.816 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:54.074 Cannot find device "nvmf_tgt_br2" 00:20:54.074 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # true 00:20:54.074 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:54.074 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:54.074 Cannot find device "nvmf_tgt_br" 00:20:54.074 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # true 00:20:54.074 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:54.074 Cannot find device "nvmf_tgt_br2" 00:20:54.074 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # true 00:20:54.074 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:54.074 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:54.074 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:54.074 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:54.074 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # true 00:20:54.074 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:54.074 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:54.074 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # true 00:20:54.074 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:54.074 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:54.074 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:54.074 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:54.074 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:54.074 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:54.074 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:54.074 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:54.074 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:54.074 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:54.074 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:54.074 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:54.074 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:54.074 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:54.074 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:54.074 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:54.074 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:54.074 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:54.074 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:54.332 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:54.332 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:54.332 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:54.332 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:54.332 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:54.332 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:54.332 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.120 ms 00:20:54.332 00:20:54.332 --- 10.0.0.2 ping statistics --- 00:20:54.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:54.332 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:20:54.332 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:54.332 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:20:54.332 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.155 ms 00:20:54.332 00:20:54.332 --- 10.0.0.3 ping statistics --- 00:20:54.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:54.332 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:20:54.332 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:54.332 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:54.332 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:20:54.332 00:20:54.332 --- 10.0.0.1 ping statistics --- 00:20:54.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:54.332 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:20:54.332 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:54.332 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@433 -- # return 0 00:20:54.332 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:54.332 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:54.332 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:54.332 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:54.332 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:54.332 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:54.332 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:54.332 13:19:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:20:54.332 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:54.332 13:19:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:54.332 13:19:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:54.332 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=101019 00:20:54.332 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:54.332 13:19:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 101019 00:20:54.332 13:19:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # '[' -z 101019 ']' 00:20:54.332 13:19:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:54.332 13:19:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:54.332 13:19:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:54.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:54.332 13:19:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:54.332 13:19:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:54.332 [2024-07-15 13:19:51.013912] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:20:54.333 [2024-07-15 13:19:51.014073] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:54.590 [2024-07-15 13:19:51.159460] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:54.590 [2024-07-15 13:19:51.287912] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:54.590 [2024-07-15 13:19:51.288371] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:54.590 [2024-07-15 13:19:51.288557] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:54.590 [2024-07-15 13:19:51.288768] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:54.590 [2024-07-15 13:19:51.288931] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:54.590 [2024-07-15 13:19:51.289144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:55.523 13:19:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:55.523 13:19:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:20:55.523 13:19:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:55.523 13:19:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:55.523 13:19:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:55.523 13:19:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:55.523 13:19:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:20:55.523 13:19:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:55.523 13:19:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:20:55.523 13:19:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:55.523 13:19:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:20:55.523 13:19:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:20:55.523 13:19:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:20:55.523 13:19:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:55.523 [2024-07-15 13:19:52.241988] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:55.523 [2024-07-15 13:19:52.257940] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:55.523 [2024-07-15 13:19:52.258158] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:55.780 [2024-07-15 13:19:52.289458] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:55.780 malloc0 00:20:55.780 13:19:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:55.780 13:19:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=101072 00:20:55.780 13:19:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:55.780 13:19:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 101072 /var/tmp/bdevperf.sock 00:20:55.780 13:19:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # '[' -z 101072 ']' 00:20:55.780 13:19:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:55.780 13:19:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:55.781 13:19:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:55.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:55.781 13:19:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:55.781 13:19:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:55.781 [2024-07-15 13:19:52.387597] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:20:55.781 [2024-07-15 13:19:52.387705] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101072 ] 00:20:56.038 [2024-07-15 13:19:52.525680] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:56.038 [2024-07-15 13:19:52.638134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:56.970 13:19:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:56.970 13:19:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:20:56.970 13:19:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:20:57.227 [2024-07-15 13:19:53.734514] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:57.227 [2024-07-15 13:19:53.734647] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:57.227 TLSTESTn1 00:20:57.227 13:19:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:57.227 Running I/O for 10 seconds... 
00:21:07.225 00:21:07.225 Latency(us) 00:21:07.225 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:07.225 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:07.225 Verification LBA range: start 0x0 length 0x2000 00:21:07.225 TLSTESTn1 : 10.02 4013.91 15.68 0.00 0.00 31826.51 7208.96 28597.53 00:21:07.225 =================================================================================================================== 00:21:07.225 Total : 4013.91 15.68 0.00 0.00 31826.51 7208.96 28597.53 00:21:07.225 0 00:21:07.482 13:20:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:21:07.482 13:20:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:21:07.482 13:20:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@804 -- # type=--id 00:21:07.482 13:20:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@805 -- # id=0 00:21:07.482 13:20:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:21:07.482 13:20:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:07.482 13:20:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:21:07.482 13:20:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:21:07.482 13:20:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@816 -- # for n in $shm_files 00:21:07.483 13:20:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:07.483 nvmf_trace.0 00:21:07.483 13:20:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # return 0 00:21:07.483 13:20:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 101072 00:21:07.483 13:20:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 101072 ']' 00:21:07.483 13:20:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 101072 00:21:07.483 13:20:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:21:07.483 13:20:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:07.483 13:20:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 101072 00:21:07.483 13:20:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:21:07.483 13:20:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:21:07.483 killing process with pid 101072 00:21:07.483 13:20:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 101072' 00:21:07.483 13:20:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 101072 00:21:07.483 Received shutdown signal, test time was about 10.000000 seconds 00:21:07.483 00:21:07.483 Latency(us) 00:21:07.483 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:07.483 =================================================================================================================== 00:21:07.483 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:07.483 [2024-07-15 13:20:04.072280] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:07.483 13:20:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 101072 00:21:07.740 13:20:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:21:07.740 13:20:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:21:07.740 13:20:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:21:07.740 13:20:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:07.740 13:20:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:21:07.740 13:20:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:07.740 13:20:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:07.740 rmmod nvme_tcp 00:21:07.998 rmmod nvme_fabrics 00:21:07.998 rmmod nvme_keyring 00:21:07.998 13:20:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:07.998 13:20:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:21:07.998 13:20:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:21:07.998 13:20:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 101019 ']' 00:21:07.998 13:20:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 101019 00:21:07.998 13:20:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 101019 ']' 00:21:07.998 13:20:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 101019 00:21:07.998 13:20:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:21:07.998 13:20:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:07.998 13:20:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 101019 00:21:07.998 13:20:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:21:07.998 13:20:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:21:07.998 killing process with pid 101019 00:21:07.998 13:20:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 101019' 00:21:07.998 13:20:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 101019 00:21:07.998 [2024-07-15 13:20:04.538328] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:07.998 13:20:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 101019 00:21:08.256 13:20:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:08.256 13:20:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:08.256 13:20:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:08.256 13:20:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:08.256 13:20:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:08.256 13:20:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:08.256 13:20:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:08.256 13:20:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:08.256 13:20:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:08.256 13:20:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:21:08.256 ************************************ 00:21:08.256 END TEST nvmf_fips 00:21:08.256 ************************************ 00:21:08.256 00:21:08.256 real 0m14.604s 00:21:08.256 user 0m20.105s 00:21:08.256 sys 0m5.677s 00:21:08.256 13:20:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:08.256 13:20:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 
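For reference, the FIPS gate traced in fips.sh above reduces to a handful of OpenSSL checks before the TLS/PSK bdevperf run. A minimal stand-alone sketch, assuming the RHEL-style OpenSSL 3.x layout seen in this log (module path and provider names may differ on other distributions):

# 1. OpenSSL must be >= 3.0.0
openssl version | awk '{print $2}'
# 2. the FIPS provider module must be installed
[[ -f /usr/lib64/ossl-modules/fips.so ]]
# 3. both the base and fips providers must be listed
openssl list -providers | grep name
# 4. with OPENSSL_CONF pointing at the generated spdk_fips.conf, MD5 must be rejected
if ! openssl md5 /dev/null >/dev/null 2>&1; then
    echo 'FIPS mode active: MD5 digest unavailable'
fi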
00:21:08.256 13:20:04 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 1 -eq 1 ']' 00:21:08.256 13:20:04 nvmf_tcp -- nvmf/nvmf.sh@66 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:21:08.256 13:20:04 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:21:08.256 13:20:04 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:08.256 13:20:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:08.256 ************************************ 00:21:08.256 START TEST nvmf_fuzz 00:21:08.256 ************************************ 00:21:08.256 13:20:04 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:21:08.256 * Looking for test storage... 00:21:08.256 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:08.256 13:20:04 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:08.256 13:20:04 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:21:08.256 13:20:04 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:08.256 13:20:04 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:08.256 13:20:04 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:08.256 13:20:04 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:08.256 13:20:04 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:08.256 13:20:04 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:08.256 13:20:04 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:08.256 13:20:04 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:08.257 13:20:04 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:08.257 13:20:04 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:08.515 13:20:04 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:21:08.515 13:20:04 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=c8b8b44b-387e-43b9-a950-dc0d98528a02 00:21:08.515 13:20:04 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:08.515 13:20:04 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:08.515 13:20:04 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:08.515 13:20:04 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:08.515 13:20:04 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:08.515 13:20:05 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:08.515 13:20:05 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:08.515 13:20:05 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:08.515 13:20:05 nvmf_tcp.nvmf_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.516 13:20:05 nvmf_tcp.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.516 13:20:05 nvmf_tcp.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.516 13:20:05 nvmf_tcp.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:21:08.516 13:20:05 nvmf_tcp.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.516 13:20:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:21:08.516 13:20:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:08.516 13:20:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:08.516 13:20:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:08.516 13:20:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:08.516 13:20:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:08.516 13:20:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:08.516 13:20:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:08.516 13:20:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:08.516 13:20:05 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:21:08.516 13:20:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:08.516 13:20:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:08.516 13:20:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:08.516 13:20:05 
nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:08.516 13:20:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:08.516 13:20:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:08.516 13:20:05 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:08.516 13:20:05 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:08.516 13:20:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:21:08.516 13:20:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:21:08.516 13:20:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:21:08.516 13:20:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:21:08.516 13:20:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:21:08.516 13:20:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@432 -- # nvmf_veth_init 00:21:08.516 13:20:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:08.516 13:20:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:08.516 13:20:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:08.516 13:20:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:08.516 13:20:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:08.516 13:20:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:08.516 13:20:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:08.516 13:20:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:08.516 13:20:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:08.516 13:20:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:08.516 13:20:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:08.516 13:20:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:08.516 13:20:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:08.516 13:20:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:08.516 Cannot find device "nvmf_tgt_br" 00:21:08.516 13:20:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@155 -- # true 00:21:08.516 13:20:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:08.516 Cannot find device "nvmf_tgt_br2" 00:21:08.516 13:20:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@156 -- # true 00:21:08.516 13:20:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:08.516 13:20:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:08.516 Cannot find device "nvmf_tgt_br" 00:21:08.516 13:20:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@158 -- # true 00:21:08.516 13:20:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:08.516 Cannot find device "nvmf_tgt_br2" 00:21:08.516 13:20:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@159 -- # true 00:21:08.516 13:20:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:08.516 13:20:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:08.516 13:20:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@162 -- 
# ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:08.516 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:08.516 13:20:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@162 -- # true 00:21:08.516 13:20:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:08.516 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:08.516 13:20:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@163 -- # true 00:21:08.516 13:20:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:08.516 13:20:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:08.516 13:20:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:08.516 13:20:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:08.516 13:20:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:08.516 13:20:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:08.516 13:20:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:08.516 13:20:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:08.516 13:20:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:08.516 13:20:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:08.516 13:20:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:08.516 13:20:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:08.516 13:20:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:08.516 13:20:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:08.775 13:20:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:08.775 13:20:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:08.775 13:20:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:08.775 13:20:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:08.775 13:20:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:08.775 13:20:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:08.775 13:20:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:08.775 13:20:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:08.775 13:20:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:08.775 13:20:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:08.775 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:08.775 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 00:21:08.775 00:21:08.775 --- 10.0.0.2 ping statistics --- 00:21:08.775 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:08.775 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:21:08.775 13:20:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:08.775 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:08.775 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:21:08.775 00:21:08.775 --- 10.0.0.3 ping statistics --- 00:21:08.775 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:08.775 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:21:08.775 13:20:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:08.775 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:08.775 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:21:08.775 00:21:08.775 --- 10.0.0.1 ping statistics --- 00:21:08.775 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:08.775 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:21:08.775 13:20:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:08.775 13:20:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@433 -- # return 0 00:21:08.775 13:20:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:08.775 13:20:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:08.775 13:20:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:08.775 13:20:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:08.775 13:20:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:08.775 13:20:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:08.775 13:20:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:08.775 13:20:05 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=101420 00:21:08.775 13:20:05 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:08.775 13:20:05 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:21:08.775 13:20:05 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 101420 00:21:08.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:08.775 13:20:05 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@827 -- # '[' -z 101420 ']' 00:21:08.775 13:20:05 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:08.775 13:20:05 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:08.775 13:20:05 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:21:08.775 13:20:05 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:08.775 13:20:05 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:09.711 13:20:06 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:09.711 13:20:06 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@860 -- # return 0 00:21:09.711 13:20:06 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:09.711 13:20:06 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.711 13:20:06 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:09.968 13:20:06 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.968 13:20:06 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:21:09.968 13:20:06 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.968 13:20:06 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:09.968 Malloc0 00:21:09.968 13:20:06 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.968 13:20:06 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:09.968 13:20:06 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.968 13:20:06 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:09.968 13:20:06 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.968 13:20:06 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:09.968 13:20:06 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.968 13:20:06 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:09.968 13:20:06 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.969 13:20:06 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:09.969 13:20:06 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.969 13:20:06 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:09.969 13:20:06 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.969 13:20:06 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:21:09.969 13:20:06 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:21:10.226 Shutting down the fuzz application 00:21:10.226 13:20:06 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:21:10.804 Shutting down the fuzz application 00:21:10.804 13:20:07 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:10.804 13:20:07 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.804 13:20:07 nvmf_tcp.nvmf_fuzz -- 
common/autotest_common.sh@10 -- # set +x 00:21:10.804 13:20:07 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.804 13:20:07 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:21:10.804 13:20:07 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:21:10.804 13:20:07 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:10.804 13:20:07 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:21:10.804 13:20:07 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:10.804 13:20:07 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:21:10.804 13:20:07 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:10.804 13:20:07 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:10.804 rmmod nvme_tcp 00:21:10.804 rmmod nvme_fabrics 00:21:10.804 rmmod nvme_keyring 00:21:11.068 13:20:07 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:11.068 13:20:07 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:21:11.068 13:20:07 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:21:11.068 13:20:07 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 101420 ']' 00:21:11.068 13:20:07 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 101420 00:21:11.068 13:20:07 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@946 -- # '[' -z 101420 ']' 00:21:11.068 13:20:07 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@950 -- # kill -0 101420 00:21:11.068 13:20:07 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@951 -- # uname 00:21:11.068 13:20:07 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:11.068 13:20:07 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 101420 00:21:11.068 killing process with pid 101420 00:21:11.068 13:20:07 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:21:11.068 13:20:07 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:21:11.068 13:20:07 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@964 -- # echo 'killing process with pid 101420' 00:21:11.068 13:20:07 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@965 -- # kill 101420 00:21:11.068 13:20:07 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@970 -- # wait 101420 00:21:11.330 13:20:07 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:11.330 13:20:07 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:11.330 13:20:07 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:11.330 13:20:07 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:11.330 13:20:07 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:11.330 13:20:07 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:11.330 13:20:07 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:11.330 13:20:07 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:11.330 13:20:07 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:11.330 13:20:07 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:21:11.330 00:21:11.330 real 0m2.965s 00:21:11.330 user 0m3.165s 00:21:11.330 sys 0m0.726s 00:21:11.331 13:20:07 
nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:11.331 ************************************ 00:21:11.331 END TEST nvmf_fuzz 00:21:11.331 ************************************ 00:21:11.331 13:20:07 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:11.331 13:20:07 nvmf_tcp -- nvmf/nvmf.sh@67 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:21:11.331 13:20:07 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:21:11.331 13:20:07 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:11.331 13:20:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:11.331 ************************************ 00:21:11.331 START TEST nvmf_multiconnection 00:21:11.331 ************************************ 00:21:11.331 13:20:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:21:11.331 * Looking for test storage... 00:21:11.331 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:11.331 13:20:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:11.331 13:20:07 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:21:11.331 13:20:07 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:11.331 13:20:07 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:11.331 13:20:07 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:11.331 13:20:07 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:11.331 13:20:07 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:11.331 13:20:07 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:11.331 13:20:07 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:11.331 13:20:07 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:11.331 13:20:07 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:11.331 13:20:07 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:11.331 13:20:07 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:21:11.331 13:20:07 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=c8b8b44b-387e-43b9-a950-dc0d98528a02 00:21:11.331 13:20:07 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:11.331 13:20:07 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:11.331 13:20:07 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:11.331 13:20:07 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:11.331 13:20:07 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:11.331 13:20:08 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:11.331 13:20:08 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:11.331 13:20:08 nvmf_tcp.nvmf_multiconnection -- 
scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:11.331 13:20:08 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:11.331 13:20:08 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:11.331 13:20:08 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:11.331 13:20:08 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:21:11.331 13:20:08 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:11.331 13:20:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:21:11.331 13:20:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:11.331 13:20:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:11.331 13:20:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:11.331 13:20:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:11.331 13:20:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:11.331 13:20:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:11.331 13:20:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:11.331 13:20:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:11.331 13:20:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:21:11.331 13:20:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:11.331 13:20:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:21:11.331 13:20:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:21:11.331 13:20:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:11.331 13:20:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:11.331 13:20:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:11.331 13:20:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:11.331 13:20:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:11.331 13:20:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:11.331 13:20:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:11.331 13:20:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:11.331 13:20:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:21:11.331 13:20:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:21:11.331 13:20:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:21:11.331 13:20:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:21:11.331 13:20:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:21:11.331 13:20:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@432 -- # nvmf_veth_init 00:21:11.331 13:20:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:11.331 13:20:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:11.331 13:20:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:11.331 13:20:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:11.331 13:20:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:11.331 13:20:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:11.331 13:20:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:11.331 13:20:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:11.331 13:20:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:11.331 13:20:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:11.331 13:20:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:11.331 13:20:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:11.331 13:20:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:11.331 13:20:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:11.331 Cannot find device "nvmf_tgt_br" 00:21:11.331 13:20:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@155 -- # true 00:21:11.331 13:20:08 nvmf_tcp.nvmf_multiconnection -- 
nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:11.331 Cannot find device "nvmf_tgt_br2" 00:21:11.331 13:20:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@156 -- # true 00:21:11.331 13:20:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:11.331 13:20:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:11.331 Cannot find device "nvmf_tgt_br" 00:21:11.331 13:20:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@158 -- # true 00:21:11.331 13:20:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:11.588 Cannot find device "nvmf_tgt_br2" 00:21:11.589 13:20:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@159 -- # true 00:21:11.589 13:20:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:11.589 13:20:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:11.589 13:20:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:11.589 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:11.589 13:20:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@162 -- # true 00:21:11.589 13:20:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:11.589 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:11.589 13:20:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@163 -- # true 00:21:11.589 13:20:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:11.589 13:20:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:11.589 13:20:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:11.589 13:20:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:11.589 13:20:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:11.589 13:20:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:11.589 13:20:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:11.589 13:20:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:11.589 13:20:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:11.589 13:20:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:11.589 13:20:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:11.589 13:20:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:11.589 13:20:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:11.589 13:20:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:11.589 13:20:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:11.589 13:20:08 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:11.589 13:20:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:11.589 13:20:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:11.589 13:20:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:11.589 13:20:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:11.589 13:20:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:11.847 13:20:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:11.847 13:20:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:11.847 13:20:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:11.847 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:11.847 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.109 ms 00:21:11.847 00:21:11.847 --- 10.0.0.2 ping statistics --- 00:21:11.847 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:11.847 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:21:11.847 13:20:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:11.847 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:11.847 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:21:11.847 00:21:11.847 --- 10.0.0.3 ping statistics --- 00:21:11.847 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:11.847 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:21:11.847 13:20:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:11.847 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
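The multiconnection test rebuilds the same namespace topology, so the earlier "Cannot find device" and "Cannot open network namespace" lines are expected: the cleanup commands run unconditionally and their failures are swallowed with true. The three pings then verify the wiring from both sides of the bridge. Roughly the equivalent check (the one-second -W timeout is an addition for illustration; the script uses plain ping -c 1):

  # Sketch: fail fast if the veth/bridge wiring is broken.
  ping -c 1 -W 1 10.0.0.2 >/dev/null || { echo 'first target address unreachable' >&2; exit 1; }
  ping -c 1 -W 1 10.0.0.3 >/dev/null || { echo 'second target address unreachable' >&2; exit 1; }
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 -W 1 10.0.0.1 >/dev/null || { echo 'initiator unreachable from the target namespace' >&2; exit 1; }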
00:21:11.847 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:21:11.847 00:21:11.847 --- 10.0.0.1 ping statistics --- 00:21:11.847 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:11.847 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:21:11.847 13:20:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:11.847 13:20:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@433 -- # return 0 00:21:11.848 13:20:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:11.848 13:20:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:11.848 13:20:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:11.848 13:20:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:11.848 13:20:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:11.848 13:20:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:11.848 13:20:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:11.848 13:20:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:21:11.848 13:20:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:11.848 13:20:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:11.848 13:20:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:11.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:11.848 13:20:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=101638 00:21:11.848 13:20:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:11.848 13:20:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 101638 00:21:11.848 13:20:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@827 -- # '[' -z 101638 ']' 00:21:11.848 13:20:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:11.848 13:20:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:11.848 13:20:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:11.848 13:20:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:11.848 13:20:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:11.848 [2024-07-15 13:20:08.465549] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:21:11.848 [2024-07-15 13:20:08.466292] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:12.105 [2024-07-15 13:20:08.614484] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:12.105 [2024-07-15 13:20:08.722157] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
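The target startup notices continue below; once its four reactors (cores 0-3, -m 0xF) are running, the remainder of the trace is the same four RPCs repeated for cnode1 through cnode11, followed by one nvme connect and serial-number wait per subsystem. Condensed into the loop it came from, this is roughly the following sketch, which drives rpc.py directly; the grep -q wait is a simplified stand-in for the test's waitforserial, and NVME_HOSTNQN/NVME_HOSTID are the values generated earlier in the trace:

  # Sketch of the per-subsystem setup and connect loop behind the trace below.
  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
  rpc nvmf_create_transport -t tcp -o -u 8192
  for i in $(seq 1 11); do
      rpc bdev_malloc_create 64 512 -b "Malloc$i"                                  # 64 MB bdev, 512-byte blocks
      rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"       # allow any host, serial SPDK$i
      rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
      rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
  done
  for i in $(seq 1 11); do
      nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
          -t tcp -n "nqn.2016-06.io.spdk:cnode$i" -a 10.0.0.2 -s 4420
      until lsblk -l -o NAME,SERIAL | grep -q "SPDK$i"; do sleep 2; done           # wait for the block device to appear
  done

Matching on the serial number (SPDK$i) in the lsblk output is how the test maps each newly attached block device back to the subsystem it connected to.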
00:21:12.105 [2024-07-15 13:20:08.722473] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:12.105 [2024-07-15 13:20:08.722705] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:12.105 [2024-07-15 13:20:08.722847] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:12.105 [2024-07-15 13:20:08.722963] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:12.105 [2024-07-15 13:20:08.723192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:12.105 [2024-07-15 13:20:08.723331] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:12.105 [2024-07-15 13:20:08.723415] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:12.105 [2024-07-15 13:20:08.723415] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:13.037 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:13.037 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@860 -- # return 0 00:21:13.037 13:20:09 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:13.037 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:13.037 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:13.037 13:20:09 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:13.037 13:20:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:13.037 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.037 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:13.037 [2024-07-15 13:20:09.464977] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:13.037 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.037 13:20:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:21:13.037 13:20:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:13.037 13:20:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:13.037 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.037 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:13.037 Malloc1 00:21:13.037 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.037 13:20:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:21:13.037 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.037 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:13.037 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.037 13:20:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:13.037 13:20:09 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.037 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:13.037 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.037 13:20:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:13.037 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.037 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:13.037 [2024-07-15 13:20:09.535429] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:13.037 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.037 13:20:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:13.037 13:20:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:21:13.037 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.037 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:13.037 Malloc2 00:21:13.037 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.037 13:20:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:21:13.037 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.037 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:13.037 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.037 13:20:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:21:13.037 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.037 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:13.037 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.037 13:20:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:13.037 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.037 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:13.037 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.037 13:20:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:13.037 13:20:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:21:13.037 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.037 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:13.037 Malloc3 00:21:13.037 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.037 13:20:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:21:13.037 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.037 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:13.037 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.037 13:20:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:21:13.037 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.037 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:13.037 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.037 13:20:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:21:13.037 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.037 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:13.037 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.037 13:20:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:13.037 13:20:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:21:13.037 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.037 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:13.037 Malloc4 00:21:13.037 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.037 13:20:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:21:13.037 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.037 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:13.037 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.037 13:20:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:21:13.037 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.037 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:13.037 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.038 13:20:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:21:13.038 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.038 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:13.038 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.038 13:20:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:13.038 13:20:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:21:13.038 13:20:09 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.038 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:13.038 Malloc5 00:21:13.038 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.038 13:20:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:21:13.038 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.038 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:13.038 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.038 13:20:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:21:13.038 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.038 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:13.038 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.038 13:20:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:21:13.038 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.038 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:13.038 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.038 13:20:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:13.038 13:20:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:21:13.038 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.038 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:13.038 Malloc6 00:21:13.038 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.038 13:20:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:21:13.038 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.038 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:13.038 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.038 13:20:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:21:13.038 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.038 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:13.038 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.038 13:20:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:21:13.038 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.038 13:20:09 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:21:13.038 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.317 13:20:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:13.317 13:20:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:21:13.317 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.317 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:13.317 Malloc7 00:21:13.317 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.317 13:20:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:21:13.317 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.317 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:13.317 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.317 13:20:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:21:13.317 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.317 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:13.317 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.317 13:20:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:21:13.317 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.317 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:13.317 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.317 13:20:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:13.317 13:20:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:21:13.317 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.317 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:13.317 Malloc8 00:21:13.317 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.317 13:20:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:21:13.317 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.317 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:13.317 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.317 13:20:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:21:13.317 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.317 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:13.317 13:20:09 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.317 13:20:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:21:13.317 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.317 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:13.317 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.317 13:20:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:13.317 13:20:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:21:13.317 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.317 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:13.317 Malloc9 00:21:13.317 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.317 13:20:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:21:13.317 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.317 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:13.317 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.317 13:20:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:21:13.317 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.317 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:13.317 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.317 13:20:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:21:13.317 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.317 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:13.317 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.317 13:20:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:13.317 13:20:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:21:13.317 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.317 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:13.317 Malloc10 00:21:13.317 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.317 13:20:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:21:13.317 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.317 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:13.317 13:20:09 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.317 13:20:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:21:13.317 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.317 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:13.317 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.317 13:20:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:21:13.317 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.317 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:13.317 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.317 13:20:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:13.317 13:20:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:21:13.317 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.317 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:13.317 Malloc11 00:21:13.317 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.317 13:20:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:21:13.317 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.317 13:20:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:13.317 13:20:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.317 13:20:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:21:13.317 13:20:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.317 13:20:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:13.317 13:20:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.317 13:20:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:21:13.317 13:20:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.317 13:20:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:13.317 13:20:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.317 13:20:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:21:13.317 13:20:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:13.317 13:20:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid=c8b8b44b-387e-43b9-a950-dc0d98528a02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:21:13.575 13:20:10 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@30 -- # waitforserial SPDK1 00:21:13.575 13:20:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:21:13.575 13:20:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:21:13.575 13:20:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:21:13.575 13:20:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:21:15.473 13:20:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:21:15.473 13:20:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:21:15.473 13:20:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK1 00:21:15.731 13:20:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:21:15.731 13:20:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:21:15.731 13:20:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:21:15.731 13:20:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:15.731 13:20:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid=c8b8b44b-387e-43b9-a950-dc0d98528a02 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:21:15.732 13:20:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:21:15.732 13:20:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:21:15.732 13:20:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:21:15.732 13:20:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:21:15.732 13:20:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:21:18.255 13:20:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:21:18.255 13:20:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK2 00:21:18.255 13:20:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:21:18.255 13:20:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:21:18.255 13:20:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:21:18.255 13:20:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:21:18.255 13:20:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:18.256 13:20:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid=c8b8b44b-387e-43b9-a950-dc0d98528a02 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:21:18.256 13:20:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:21:18.256 13:20:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:21:18.256 13:20:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:21:18.256 13:20:14 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:21:18.256 13:20:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:21:20.161 13:20:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:21:20.161 13:20:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:21:20.161 13:20:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK3 00:21:20.161 13:20:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:21:20.161 13:20:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:21:20.161 13:20:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:21:20.161 13:20:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:20.161 13:20:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid=c8b8b44b-387e-43b9-a950-dc0d98528a02 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:21:20.161 13:20:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:21:20.161 13:20:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:21:20.161 13:20:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:21:20.161 13:20:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:21:20.161 13:20:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:21:22.058 13:20:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:21:22.058 13:20:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:21:22.058 13:20:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK4 00:21:22.058 13:20:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:21:22.058 13:20:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:21:22.058 13:20:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:21:22.058 13:20:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:22.058 13:20:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid=c8b8b44b-387e-43b9-a950-dc0d98528a02 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:21:22.321 13:20:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:21:22.321 13:20:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:21:22.321 13:20:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:21:22.321 13:20:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:21:22.321 13:20:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:21:24.221 13:20:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:21:24.221 13:20:20 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:21:24.221 13:20:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK5 00:21:24.479 13:20:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:21:24.479 13:20:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:21:24.479 13:20:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:21:24.479 13:20:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:24.479 13:20:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid=c8b8b44b-387e-43b9-a950-dc0d98528a02 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:21:24.479 13:20:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:21:24.479 13:20:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:21:24.479 13:20:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:21:24.479 13:20:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:21:24.479 13:20:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:21:27.014 13:20:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:21:27.014 13:20:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:21:27.014 13:20:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK6 00:21:27.014 13:20:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:21:27.014 13:20:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:21:27.014 13:20:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:21:27.014 13:20:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:27.014 13:20:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid=c8b8b44b-387e-43b9-a950-dc0d98528a02 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:21:27.014 13:20:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:21:27.014 13:20:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:21:27.014 13:20:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:21:27.014 13:20:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:21:27.014 13:20:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:21:28.925 13:20:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:21:28.925 13:20:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:21:28.925 13:20:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK7 00:21:28.925 13:20:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:21:28.925 
13:20:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:21:28.925 13:20:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:21:28.925 13:20:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:28.925 13:20:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid=c8b8b44b-387e-43b9-a950-dc0d98528a02 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:21:28.925 13:20:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:21:28.925 13:20:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:21:28.925 13:20:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:21:28.925 13:20:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:21:28.925 13:20:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:21:30.856 13:20:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:21:30.856 13:20:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:21:30.856 13:20:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK8 00:21:30.856 13:20:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:21:30.856 13:20:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:21:30.856 13:20:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:21:30.856 13:20:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:30.856 13:20:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid=c8b8b44b-387e-43b9-a950-dc0d98528a02 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:21:31.114 13:20:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:21:31.114 13:20:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:21:31.114 13:20:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:21:31.114 13:20:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:21:31.114 13:20:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:21:33.008 13:20:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:21:33.008 13:20:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:21:33.008 13:20:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK9 00:21:33.266 13:20:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:21:33.266 13:20:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:21:33.266 13:20:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:21:33.266 13:20:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in 
$(seq 1 $NVMF_SUBSYS) 00:21:33.266 13:20:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid=c8b8b44b-387e-43b9-a950-dc0d98528a02 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:21:33.266 13:20:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:21:33.266 13:20:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:21:33.266 13:20:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:21:33.266 13:20:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:21:33.266 13:20:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:21:35.814 13:20:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:21:35.814 13:20:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK10 00:21:35.814 13:20:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:21:35.814 13:20:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:21:35.814 13:20:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:21:35.814 13:20:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:21:35.814 13:20:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:35.814 13:20:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid=c8b8b44b-387e-43b9-a950-dc0d98528a02 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:21:35.814 13:20:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:21:35.814 13:20:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:21:35.814 13:20:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:21:35.814 13:20:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:21:35.814 13:20:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:21:37.710 13:20:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:21:37.710 13:20:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK11 00:21:37.710 13:20:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:21:37.710 13:20:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:21:37.710 13:20:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:21:37.710 13:20:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:21:37.710 13:20:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:21:37.710 [global] 00:21:37.710 thread=1 00:21:37.710 invalidate=1 00:21:37.710 rw=read 00:21:37.710 time_based=1 00:21:37.710 runtime=10 00:21:37.710 ioengine=libaio 00:21:37.710 direct=1 00:21:37.710 bs=262144 00:21:37.710 iodepth=64 
00:21:37.710 norandommap=1 00:21:37.710 numjobs=1 00:21:37.710 00:21:37.710 [job0] 00:21:37.710 filename=/dev/nvme0n1 00:21:37.710 [job1] 00:21:37.710 filename=/dev/nvme10n1 00:21:37.710 [job2] 00:21:37.710 filename=/dev/nvme1n1 00:21:37.710 [job3] 00:21:37.710 filename=/dev/nvme2n1 00:21:37.710 [job4] 00:21:37.710 filename=/dev/nvme3n1 00:21:37.710 [job5] 00:21:37.710 filename=/dev/nvme4n1 00:21:37.710 [job6] 00:21:37.710 filename=/dev/nvme5n1 00:21:37.710 [job7] 00:21:37.710 filename=/dev/nvme6n1 00:21:37.710 [job8] 00:21:37.710 filename=/dev/nvme7n1 00:21:37.710 [job9] 00:21:37.710 filename=/dev/nvme8n1 00:21:37.710 [job10] 00:21:37.710 filename=/dev/nvme9n1 00:21:37.710 Could not set queue depth (nvme0n1) 00:21:37.710 Could not set queue depth (nvme10n1) 00:21:37.710 Could not set queue depth (nvme1n1) 00:21:37.710 Could not set queue depth (nvme2n1) 00:21:37.710 Could not set queue depth (nvme3n1) 00:21:37.710 Could not set queue depth (nvme4n1) 00:21:37.710 Could not set queue depth (nvme5n1) 00:21:37.710 Could not set queue depth (nvme6n1) 00:21:37.710 Could not set queue depth (nvme7n1) 00:21:37.710 Could not set queue depth (nvme8n1) 00:21:37.710 Could not set queue depth (nvme9n1) 00:21:37.710 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:37.710 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:37.710 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:37.710 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:37.710 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:37.710 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:37.710 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:37.710 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:37.710 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:37.710 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:37.710 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:37.710 fio-3.35 00:21:37.710 Starting 11 threads 00:21:49.908 00:21:49.908 job0: (groupid=0, jobs=1): err= 0: pid=102108: Mon Jul 15 13:20:44 2024 00:21:49.908 read: IOPS=659, BW=165MiB/s (173MB/s)(1664MiB/10096msec) 00:21:49.908 slat (usec): min=17, max=76204, avg=1498.04, stdev=5586.07 00:21:49.908 clat (msec): min=19, max=207, avg=95.43, stdev=17.94 00:21:49.908 lat (msec): min=19, max=207, avg=96.92, stdev=18.77 00:21:49.908 clat percentiles (msec): 00:21:49.908 | 1.00th=[ 59], 5.00th=[ 75], 10.00th=[ 79], 20.00th=[ 84], 00:21:49.908 | 30.00th=[ 87], 40.00th=[ 90], 50.00th=[ 93], 60.00th=[ 95], 00:21:49.908 | 70.00th=[ 99], 80.00th=[ 105], 90.00th=[ 121], 95.00th=[ 130], 00:21:49.908 | 99.00th=[ 153], 99.50th=[ 176], 99.90th=[ 207], 99.95th=[ 209], 00:21:49.908 | 99.99th=[ 209] 00:21:49.908 bw ( KiB/s): min=121344, max=200704, per=9.29%, avg=168668.55, stdev=21674.23, samples=20 00:21:49.908 iops : min= 474, max= 784, avg=658.80, stdev=84.64, samples=20 00:21:49.908 
lat (msec) : 20=0.06%, 50=0.66%, 100=73.11%, 250=26.16% 00:21:49.908 cpu : usr=0.29%, sys=2.39%, ctx=1268, majf=0, minf=4097 00:21:49.908 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:21:49.908 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:49.908 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:49.908 issued rwts: total=6654,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:49.908 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:49.908 job1: (groupid=0, jobs=1): err= 0: pid=102109: Mon Jul 15 13:20:44 2024 00:21:49.908 read: IOPS=638, BW=160MiB/s (167MB/s)(1614MiB/10105msec) 00:21:49.908 slat (usec): min=17, max=61902, avg=1540.35, stdev=5210.72 00:21:49.908 clat (msec): min=23, max=229, avg=98.44, stdev=17.59 00:21:49.908 lat (msec): min=25, max=229, avg=99.98, stdev=18.34 00:21:49.908 clat percentiles (msec): 00:21:49.908 | 1.00th=[ 65], 5.00th=[ 77], 10.00th=[ 82], 20.00th=[ 87], 00:21:49.908 | 30.00th=[ 90], 40.00th=[ 93], 50.00th=[ 96], 60.00th=[ 100], 00:21:49.908 | 70.00th=[ 104], 80.00th=[ 108], 90.00th=[ 124], 95.00th=[ 132], 00:21:49.908 | 99.00th=[ 148], 99.50th=[ 159], 99.90th=[ 230], 99.95th=[ 230], 00:21:49.908 | 99.99th=[ 230] 00:21:49.908 bw ( KiB/s): min=116224, max=196608, per=9.01%, avg=163557.40, stdev=20323.87, samples=20 00:21:49.908 iops : min= 454, max= 768, avg=638.75, stdev=79.43, samples=20 00:21:49.908 lat (msec) : 50=0.22%, 100=62.48%, 250=37.30% 00:21:49.908 cpu : usr=0.27%, sys=2.09%, ctx=1169, majf=0, minf=4097 00:21:49.908 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:21:49.908 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:49.908 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:49.908 issued rwts: total=6456,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:49.908 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:49.908 job2: (groupid=0, jobs=1): err= 0: pid=102110: Mon Jul 15 13:20:44 2024 00:21:49.908 read: IOPS=584, BW=146MiB/s (153MB/s)(1476MiB/10096msec) 00:21:49.908 slat (usec): min=18, max=90286, avg=1678.30, stdev=6334.39 00:21:49.908 clat (msec): min=53, max=218, avg=107.59, stdev=20.24 00:21:49.908 lat (msec): min=53, max=256, avg=109.27, stdev=21.21 00:21:49.908 clat percentiles (msec): 00:21:49.908 | 1.00th=[ 64], 5.00th=[ 79], 10.00th=[ 85], 20.00th=[ 91], 00:21:49.908 | 30.00th=[ 96], 40.00th=[ 101], 50.00th=[ 106], 60.00th=[ 113], 00:21:49.908 | 70.00th=[ 120], 80.00th=[ 126], 90.00th=[ 132], 95.00th=[ 140], 00:21:49.908 | 99.00th=[ 165], 99.50th=[ 174], 99.90th=[ 213], 99.95th=[ 220], 00:21:49.908 | 99.99th=[ 220] 00:21:49.908 bw ( KiB/s): min=96768, max=201216, per=8.24%, avg=149527.05, stdev=26225.45, samples=20 00:21:49.908 iops : min= 378, max= 786, avg=584.00, stdev=102.48, samples=20 00:21:49.908 lat (msec) : 100=40.20%, 250=59.80% 00:21:49.908 cpu : usr=0.20%, sys=2.17%, ctx=1336, majf=0, minf=4097 00:21:49.908 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:21:49.908 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:49.908 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:49.908 issued rwts: total=5905,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:49.908 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:49.908 job3: (groupid=0, jobs=1): err= 0: pid=102111: Mon Jul 15 13:20:44 2024 00:21:49.908 read: IOPS=553, BW=138MiB/s 
(145MB/s)(1396MiB/10091msec) 00:21:49.908 slat (usec): min=17, max=86252, avg=1787.09, stdev=6785.93 00:21:49.908 clat (msec): min=15, max=214, avg=113.68, stdev=22.89 00:21:49.908 lat (msec): min=16, max=239, avg=115.47, stdev=23.93 00:21:49.908 clat percentiles (msec): 00:21:49.908 | 1.00th=[ 42], 5.00th=[ 83], 10.00th=[ 89], 20.00th=[ 95], 00:21:49.908 | 30.00th=[ 101], 40.00th=[ 108], 50.00th=[ 113], 60.00th=[ 120], 00:21:49.908 | 70.00th=[ 125], 80.00th=[ 131], 90.00th=[ 142], 95.00th=[ 150], 00:21:49.908 | 99.00th=[ 176], 99.50th=[ 178], 99.90th=[ 215], 99.95th=[ 215], 00:21:49.908 | 99.99th=[ 215] 00:21:49.908 bw ( KiB/s): min=108544, max=182272, per=7.78%, avg=141254.90, stdev=22418.36, samples=20 00:21:49.908 iops : min= 424, max= 712, avg=551.60, stdev=87.63, samples=20 00:21:49.908 lat (msec) : 20=0.11%, 50=0.97%, 100=28.44%, 250=70.48% 00:21:49.908 cpu : usr=0.17%, sys=1.84%, ctx=1151, majf=0, minf=4097 00:21:49.908 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:21:49.908 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:49.908 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:49.908 issued rwts: total=5583,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:49.908 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:49.908 job4: (groupid=0, jobs=1): err= 0: pid=102112: Mon Jul 15 13:20:44 2024 00:21:49.908 read: IOPS=545, BW=136MiB/s (143MB/s)(1375MiB/10089msec) 00:21:49.908 slat (usec): min=16, max=81255, avg=1805.42, stdev=6673.65 00:21:49.908 clat (msec): min=25, max=216, avg=115.37, stdev=22.09 00:21:49.908 lat (msec): min=26, max=227, avg=117.17, stdev=23.20 00:21:49.908 clat percentiles (msec): 00:21:49.908 | 1.00th=[ 60], 5.00th=[ 84], 10.00th=[ 90], 20.00th=[ 97], 00:21:49.908 | 30.00th=[ 104], 40.00th=[ 110], 50.00th=[ 116], 60.00th=[ 121], 00:21:49.908 | 70.00th=[ 126], 80.00th=[ 134], 90.00th=[ 144], 95.00th=[ 150], 00:21:49.908 | 99.00th=[ 167], 99.50th=[ 180], 99.90th=[ 192], 99.95th=[ 215], 00:21:49.908 | 99.99th=[ 218] 00:21:49.908 bw ( KiB/s): min=99840, max=178331, per=7.66%, avg=139097.15, stdev=22245.53, samples=20 00:21:49.908 iops : min= 390, max= 696, avg=543.15, stdev=86.91, samples=20 00:21:49.908 lat (msec) : 50=0.65%, 100=23.88%, 250=75.47% 00:21:49.908 cpu : usr=0.20%, sys=1.83%, ctx=988, majf=0, minf=4097 00:21:49.908 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:21:49.908 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:49.908 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:49.908 issued rwts: total=5499,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:49.908 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:49.908 job5: (groupid=0, jobs=1): err= 0: pid=102113: Mon Jul 15 13:20:44 2024 00:21:49.908 read: IOPS=731, BW=183MiB/s (192MB/s)(1846MiB/10092msec) 00:21:49.908 slat (usec): min=17, max=104122, avg=1332.34, stdev=5611.80 00:21:49.908 clat (msec): min=13, max=219, avg=86.01, stdev=45.74 00:21:49.908 lat (msec): min=14, max=253, avg=87.35, stdev=46.69 00:21:49.908 clat percentiles (msec): 00:21:49.908 | 1.00th=[ 21], 5.00th=[ 26], 10.00th=[ 29], 20.00th=[ 34], 00:21:49.908 | 30.00th=[ 42], 40.00th=[ 62], 50.00th=[ 89], 60.00th=[ 116], 00:21:49.908 | 70.00th=[ 124], 80.00th=[ 131], 90.00th=[ 142], 95.00th=[ 150], 00:21:49.908 | 99.00th=[ 169], 99.50th=[ 180], 99.90th=[ 209], 99.95th=[ 220], 00:21:49.908 | 99.99th=[ 220] 00:21:49.908 bw ( KiB/s): min=102912, 
max=505344, per=10.32%, avg=187335.20, stdev=117124.58, samples=20 00:21:49.908 iops : min= 402, max= 1974, avg=731.60, stdev=457.61, samples=20 00:21:49.908 lat (msec) : 20=0.89%, 50=30.56%, 100=19.68%, 250=48.87% 00:21:49.908 cpu : usr=0.24%, sys=2.26%, ctx=1430, majf=0, minf=4097 00:21:49.908 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:21:49.908 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:49.908 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:49.908 issued rwts: total=7383,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:49.908 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:49.908 job6: (groupid=0, jobs=1): err= 0: pid=102114: Mon Jul 15 13:20:44 2024 00:21:49.908 read: IOPS=652, BW=163MiB/s (171MB/s)(1650MiB/10105msec) 00:21:49.908 slat (usec): min=14, max=54090, avg=1508.28, stdev=5234.19 00:21:49.908 clat (msec): min=5, max=214, avg=96.33, stdev=19.74 00:21:49.908 lat (msec): min=5, max=214, avg=97.83, stdev=20.42 00:21:49.908 clat percentiles (msec): 00:21:49.908 | 1.00th=[ 25], 5.00th=[ 73], 10.00th=[ 80], 20.00th=[ 85], 00:21:49.908 | 30.00th=[ 87], 40.00th=[ 92], 50.00th=[ 94], 60.00th=[ 97], 00:21:49.908 | 70.00th=[ 102], 80.00th=[ 109], 90.00th=[ 123], 95.00th=[ 131], 00:21:49.908 | 99.00th=[ 148], 99.50th=[ 165], 99.90th=[ 213], 99.95th=[ 215], 00:21:49.908 | 99.99th=[ 215] 00:21:49.908 bw ( KiB/s): min=118272, max=195584, per=9.21%, avg=167190.20, stdev=22219.30, samples=20 00:21:49.908 iops : min= 462, max= 764, avg=652.95, stdev=86.84, samples=20 00:21:49.908 lat (msec) : 10=0.14%, 20=0.36%, 50=1.09%, 100=65.46%, 250=32.95% 00:21:49.908 cpu : usr=0.17%, sys=2.21%, ctx=1279, majf=0, minf=4097 00:21:49.908 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:21:49.908 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:49.908 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:49.908 issued rwts: total=6598,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:49.908 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:49.908 job7: (groupid=0, jobs=1): err= 0: pid=102115: Mon Jul 15 13:20:44 2024 00:21:49.908 read: IOPS=600, BW=150MiB/s (157MB/s)(1510MiB/10057msec) 00:21:49.908 slat (usec): min=13, max=72879, avg=1606.88, stdev=5532.33 00:21:49.908 clat (msec): min=10, max=204, avg=104.74, stdev=26.31 00:21:49.908 lat (msec): min=10, max=220, avg=106.34, stdev=27.13 00:21:49.908 clat percentiles (msec): 00:21:49.908 | 1.00th=[ 54], 5.00th=[ 64], 10.00th=[ 71], 20.00th=[ 84], 00:21:49.908 | 30.00th=[ 90], 40.00th=[ 96], 50.00th=[ 104], 60.00th=[ 110], 00:21:49.908 | 70.00th=[ 120], 80.00th=[ 128], 90.00th=[ 142], 95.00th=[ 153], 00:21:49.908 | 99.00th=[ 163], 99.50th=[ 169], 99.90th=[ 182], 99.95th=[ 186], 00:21:49.908 | 99.99th=[ 205] 00:21:49.908 bw ( KiB/s): min=101888, max=232448, per=8.43%, avg=152974.75, stdev=34830.55, samples=20 00:21:49.908 iops : min= 398, max= 908, avg=597.40, stdev=135.97, samples=20 00:21:49.908 lat (msec) : 20=0.38%, 50=0.20%, 100=45.12%, 250=54.30% 00:21:49.908 cpu : usr=0.21%, sys=2.05%, ctx=1299, majf=0, minf=4097 00:21:49.908 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:21:49.909 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:49.909 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:49.909 issued rwts: total=6039,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:49.909 latency : 
target=0, window=0, percentile=100.00%, depth=64 00:21:49.909 job8: (groupid=0, jobs=1): err= 0: pid=102116: Mon Jul 15 13:20:44 2024 00:21:49.909 read: IOPS=769, BW=192MiB/s (202MB/s)(1942MiB/10091msec) 00:21:49.909 slat (usec): min=16, max=87739, avg=1251.56, stdev=5327.09 00:21:49.909 clat (msec): min=4, max=230, avg=81.69, stdev=49.36 00:21:49.909 lat (msec): min=4, max=231, avg=82.94, stdev=50.29 00:21:49.909 clat percentiles (msec): 00:21:49.909 | 1.00th=[ 8], 5.00th=[ 18], 10.00th=[ 25], 20.00th=[ 32], 00:21:49.909 | 30.00th=[ 37], 40.00th=[ 56], 50.00th=[ 69], 60.00th=[ 113], 00:21:49.909 | 70.00th=[ 124], 80.00th=[ 130], 90.00th=[ 142], 95.00th=[ 157], 00:21:49.909 | 99.00th=[ 188], 99.50th=[ 192], 99.90th=[ 224], 99.95th=[ 224], 00:21:49.909 | 99.99th=[ 230] 00:21:49.909 bw ( KiB/s): min=96768, max=529920, per=10.86%, avg=197218.10, stdev=138084.48, samples=20 00:21:49.909 iops : min= 378, max= 2070, avg=770.15, stdev=539.38, samples=20 00:21:49.909 lat (msec) : 10=3.10%, 20=2.69%, 50=32.30%, 100=15.94%, 250=45.96% 00:21:49.909 cpu : usr=0.31%, sys=2.43%, ctx=1546, majf=0, minf=4097 00:21:49.909 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:21:49.909 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:49.909 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:49.909 issued rwts: total=7767,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:49.909 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:49.909 job9: (groupid=0, jobs=1): err= 0: pid=102117: Mon Jul 15 13:20:44 2024 00:21:49.909 read: IOPS=760, BW=190MiB/s (199MB/s)(1906MiB/10026msec) 00:21:49.909 slat (usec): min=18, max=101555, avg=1279.06, stdev=5208.72 00:21:49.909 clat (usec): min=1197, max=186113, avg=82735.35, stdev=38909.06 00:21:49.909 lat (usec): min=1264, max=246946, avg=84014.42, stdev=39759.28 00:21:49.909 clat percentiles (msec): 00:21:49.909 | 1.00th=[ 14], 5.00th=[ 23], 10.00th=[ 27], 20.00th=[ 33], 00:21:49.909 | 30.00th=[ 50], 40.00th=[ 88], 50.00th=[ 94], 60.00th=[ 99], 00:21:49.909 | 70.00th=[ 106], 80.00th=[ 117], 90.00th=[ 127], 95.00th=[ 136], 00:21:49.909 | 99.00th=[ 153], 99.50th=[ 169], 99.90th=[ 186], 99.95th=[ 186], 00:21:49.909 | 99.99th=[ 186] 00:21:49.909 bw ( KiB/s): min=107008, max=536064, per=10.66%, avg=193548.35, stdev=118250.55, samples=20 00:21:49.909 iops : min= 418, max= 2094, avg=756.00, stdev=461.93, samples=20 00:21:49.909 lat (msec) : 2=0.03%, 10=0.51%, 20=3.49%, 50=26.11%, 100=32.26% 00:21:49.909 lat (msec) : 250=37.60% 00:21:49.909 cpu : usr=0.31%, sys=2.63%, ctx=1454, majf=0, minf=4097 00:21:49.909 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:21:49.909 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:49.909 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:49.909 issued rwts: total=7622,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:49.909 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:49.909 job10: (groupid=0, jobs=1): err= 0: pid=102118: Mon Jul 15 13:20:44 2024 00:21:49.909 read: IOPS=611, BW=153MiB/s (160MB/s)(1539MiB/10057msec) 00:21:49.909 slat (usec): min=17, max=110196, avg=1620.27, stdev=5848.22 00:21:49.909 clat (msec): min=43, max=178, avg=102.78, stdev=21.82 00:21:49.909 lat (msec): min=43, max=237, avg=104.40, stdev=22.72 00:21:49.909 clat percentiles (msec): 00:21:49.909 | 1.00th=[ 56], 5.00th=[ 66], 10.00th=[ 73], 20.00th=[ 86], 00:21:49.909 | 30.00th=[ 92], 40.00th=[ 97], 
50.00th=[ 102], 60.00th=[ 107], 00:21:49.909 | 70.00th=[ 114], 80.00th=[ 124], 90.00th=[ 132], 95.00th=[ 138], 00:21:49.909 | 99.00th=[ 161], 99.50th=[ 167], 99.90th=[ 180], 99.95th=[ 180], 00:21:49.909 | 99.99th=[ 180] 00:21:49.909 bw ( KiB/s): min=96574, max=246252, per=8.59%, avg=155931.05, stdev=33586.62, samples=20 00:21:49.909 iops : min= 377, max= 961, avg=608.95, stdev=131.17, samples=20 00:21:49.909 lat (msec) : 50=0.31%, 100=46.30%, 250=53.40% 00:21:49.909 cpu : usr=0.23%, sys=2.17%, ctx=1364, majf=0, minf=4097 00:21:49.909 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:21:49.909 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:49.909 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:49.909 issued rwts: total=6154,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:49.909 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:49.909 00:21:49.909 Run status group 0 (all jobs): 00:21:49.909 READ: bw=1773MiB/s (1859MB/s), 136MiB/s-192MiB/s (143MB/s-202MB/s), io=17.5GiB (18.8GB), run=10026-10105msec 00:21:49.909 00:21:49.909 Disk stats (read/write): 00:21:49.909 nvme0n1: ios=13181/0, merge=0/0, ticks=1236372/0, in_queue=1236372, util=97.45% 00:21:49.909 nvme10n1: ios=12785/0, merge=0/0, ticks=1237939/0, in_queue=1237939, util=97.59% 00:21:49.909 nvme1n1: ios=11682/0, merge=0/0, ticks=1239362/0, in_queue=1239362, util=97.86% 00:21:49.909 nvme2n1: ios=11066/0, merge=0/0, ticks=1240037/0, in_queue=1240037, util=97.97% 00:21:49.909 nvme3n1: ios=10876/0, merge=0/0, ticks=1239913/0, in_queue=1239913, util=98.00% 00:21:49.909 nvme4n1: ios=14646/0, merge=0/0, ticks=1235671/0, in_queue=1235671, util=98.00% 00:21:49.909 nvme5n1: ios=13068/0, merge=0/0, ticks=1236138/0, in_queue=1236138, util=98.14% 00:21:49.909 nvme6n1: ios=11958/0, merge=0/0, ticks=1241574/0, in_queue=1241574, util=98.27% 00:21:49.909 nvme7n1: ios=15435/0, merge=0/0, ticks=1233242/0, in_queue=1233242, util=98.50% 00:21:49.909 nvme8n1: ios=15146/0, merge=0/0, ticks=1242490/0, in_queue=1242490, util=98.79% 00:21:49.909 nvme9n1: ios=12197/0, merge=0/0, ticks=1244173/0, in_queue=1244173, util=99.00% 00:21:49.909 13:20:44 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:21:49.909 [global] 00:21:49.909 thread=1 00:21:49.909 invalidate=1 00:21:49.909 rw=randwrite 00:21:49.909 time_based=1 00:21:49.909 runtime=10 00:21:49.909 ioengine=libaio 00:21:49.909 direct=1 00:21:49.909 bs=262144 00:21:49.909 iodepth=64 00:21:49.909 norandommap=1 00:21:49.909 numjobs=1 00:21:49.909 00:21:49.909 [job0] 00:21:49.909 filename=/dev/nvme0n1 00:21:49.909 [job1] 00:21:49.909 filename=/dev/nvme10n1 00:21:49.909 [job2] 00:21:49.909 filename=/dev/nvme1n1 00:21:49.909 [job3] 00:21:49.909 filename=/dev/nvme2n1 00:21:49.909 [job4] 00:21:49.909 filename=/dev/nvme3n1 00:21:49.909 [job5] 00:21:49.909 filename=/dev/nvme4n1 00:21:49.909 [job6] 00:21:49.909 filename=/dev/nvme5n1 00:21:49.909 [job7] 00:21:49.909 filename=/dev/nvme6n1 00:21:49.909 [job8] 00:21:49.909 filename=/dev/nvme7n1 00:21:49.909 [job9] 00:21:49.909 filename=/dev/nvme8n1 00:21:49.909 [job10] 00:21:49.909 filename=/dev/nvme9n1 00:21:49.909 Could not set queue depth (nvme0n1) 00:21:49.909 Could not set queue depth (nvme10n1) 00:21:49.909 Could not set queue depth (nvme1n1) 00:21:49.909 Could not set queue depth (nvme2n1) 00:21:49.909 Could not set queue depth (nvme3n1) 00:21:49.909 Could not set 
queue depth (nvme4n1) 00:21:49.909 Could not set queue depth (nvme5n1) 00:21:49.909 Could not set queue depth (nvme6n1) 00:21:49.909 Could not set queue depth (nvme7n1) 00:21:49.909 Could not set queue depth (nvme8n1) 00:21:49.909 Could not set queue depth (nvme9n1) 00:21:49.909 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:49.909 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:49.909 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:49.909 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:49.909 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:49.909 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:49.909 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:49.909 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:49.909 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:49.909 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:49.909 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:49.909 fio-3.35 00:21:49.909 Starting 11 threads 00:21:59.871 00:21:59.871 job0: (groupid=0, jobs=1): err= 0: pid=102319: Mon Jul 15 13:20:55 2024 00:21:59.871 write: IOPS=556, BW=139MiB/s (146MB/s)(1407MiB/10113msec); 0 zone resets 00:21:59.871 slat (usec): min=23, max=26773, avg=1772.45, stdev=3010.97 00:21:59.871 clat (msec): min=7, max=225, avg=113.16, stdev=10.15 00:21:59.871 lat (msec): min=7, max=225, avg=114.93, stdev= 9.85 00:21:59.871 clat percentiles (msec): 00:21:59.871 | 1.00th=[ 105], 5.00th=[ 107], 10.00th=[ 107], 20.00th=[ 109], 00:21:59.871 | 30.00th=[ 113], 40.00th=[ 113], 50.00th=[ 114], 60.00th=[ 114], 00:21:59.871 | 70.00th=[ 115], 80.00th=[ 115], 90.00th=[ 118], 95.00th=[ 122], 00:21:59.871 | 99.00th=[ 131], 99.50th=[ 176], 99.90th=[ 220], 99.95th=[ 220], 00:21:59.871 | 99.99th=[ 226] 00:21:59.871 bw ( KiB/s): min=131072, max=145920, per=8.47%, avg=142424.05, stdev=3657.32, samples=20 00:21:59.871 iops : min= 512, max= 570, avg=556.30, stdev=14.28, samples=20 00:21:59.871 lat (msec) : 10=0.12%, 20=0.07%, 50=0.09%, 100=0.55%, 250=99.16% 00:21:59.871 cpu : usr=1.31%, sys=1.57%, ctx=7420, majf=0, minf=1 00:21:59.871 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:21:59.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:59.871 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:59.871 issued rwts: total=0,5627,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:59.871 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:59.871 job1: (groupid=0, jobs=1): err= 0: pid=102320: Mon Jul 15 13:20:55 2024 00:21:59.871 write: IOPS=557, BW=139MiB/s (146MB/s)(1408MiB/10104msec); 0 zone resets 00:21:59.871 slat (usec): min=24, max=10724, avg=1770.55, stdev=3006.80 00:21:59.871 clat (msec): min=6, max=220, avg=113.02, stdev= 9.75 00:21:59.871 lat 
(msec): min=6, max=220, avg=114.79, stdev= 9.43 00:21:59.871 clat percentiles (msec): 00:21:59.871 | 1.00th=[ 104], 5.00th=[ 107], 10.00th=[ 107], 20.00th=[ 109], 00:21:59.871 | 30.00th=[ 113], 40.00th=[ 113], 50.00th=[ 114], 60.00th=[ 114], 00:21:59.871 | 70.00th=[ 115], 80.00th=[ 115], 90.00th=[ 118], 95.00th=[ 122], 00:21:59.871 | 99.00th=[ 131], 99.50th=[ 169], 99.90th=[ 213], 99.95th=[ 213], 00:21:59.871 | 99.99th=[ 222] 00:21:59.871 bw ( KiB/s): min=131584, max=145920, per=8.47%, avg=142540.80, stdev=3625.72, samples=20 00:21:59.871 iops : min= 514, max= 570, avg=556.80, stdev=14.16, samples=20 00:21:59.871 lat (msec) : 10=0.02%, 20=0.07%, 50=0.28%, 100=0.57%, 250=99.06% 00:21:59.871 cpu : usr=1.53%, sys=1.41%, ctx=4808, majf=0, minf=1 00:21:59.871 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:21:59.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:59.871 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:59.871 issued rwts: total=0,5631,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:59.871 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:59.871 job2: (groupid=0, jobs=1): err= 0: pid=102332: Mon Jul 15 13:20:55 2024 00:21:59.871 write: IOPS=770, BW=193MiB/s (202MB/s)(1939MiB/10068msec); 0 zone resets 00:21:59.871 slat (usec): min=21, max=15826, avg=1284.39, stdev=2180.95 00:21:59.871 clat (msec): min=20, max=144, avg=81.77, stdev=11.40 00:21:59.871 lat (msec): min=20, max=144, avg=83.05, stdev=11.39 00:21:59.871 clat percentiles (msec): 00:21:59.871 | 1.00th=[ 73], 5.00th=[ 73], 10.00th=[ 74], 20.00th=[ 75], 00:21:59.871 | 30.00th=[ 78], 40.00th=[ 79], 50.00th=[ 79], 60.00th=[ 79], 00:21:59.871 | 70.00th=[ 80], 80.00th=[ 83], 90.00th=[ 102], 95.00th=[ 113], 00:21:59.871 | 99.00th=[ 118], 99.50th=[ 125], 99.90th=[ 136], 99.95th=[ 140], 00:21:59.871 | 99.99th=[ 146] 00:21:59.871 bw ( KiB/s): min=137216, max=212480, per=11.70%, avg=196834.50, stdev=21926.54, samples=20 00:21:59.871 iops : min= 536, max= 830, avg=768.85, stdev=85.64, samples=20 00:21:59.871 lat (msec) : 50=0.21%, 100=89.30%, 250=10.50% 00:21:59.871 cpu : usr=1.60%, sys=2.25%, ctx=9108, majf=0, minf=1 00:21:59.871 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:21:59.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:59.871 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:59.871 issued rwts: total=0,7755,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:59.871 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:59.871 job3: (groupid=0, jobs=1): err= 0: pid=102333: Mon Jul 15 13:20:55 2024 00:21:59.871 write: IOPS=562, BW=141MiB/s (148MB/s)(1421MiB/10099msec); 0 zone resets 00:21:59.871 slat (usec): min=24, max=13145, avg=1725.21, stdev=2995.86 00:21:59.871 clat (msec): min=6, max=216, avg=111.92, stdev=12.67 00:21:59.871 lat (msec): min=7, max=216, avg=113.65, stdev=12.50 00:21:59.871 clat percentiles (msec): 00:21:59.871 | 1.00th=[ 47], 5.00th=[ 104], 10.00th=[ 106], 20.00th=[ 109], 00:21:59.871 | 30.00th=[ 111], 40.00th=[ 112], 50.00th=[ 113], 60.00th=[ 114], 00:21:59.871 | 70.00th=[ 116], 80.00th=[ 117], 90.00th=[ 118], 95.00th=[ 121], 00:21:59.871 | 99.00th=[ 144], 99.50th=[ 167], 99.90th=[ 209], 99.95th=[ 211], 00:21:59.871 | 99.99th=[ 218] 00:21:59.871 bw ( KiB/s): min=137728, max=161792, per=8.55%, avg=143857.50, stdev=4854.47, samples=20 00:21:59.871 iops : min= 538, max= 632, avg=561.90, stdev=18.96, samples=20 00:21:59.871 
lat (msec) : 10=0.04%, 20=0.32%, 50=0.72%, 100=1.50%, 250=97.43% 00:21:59.871 cpu : usr=1.27%, sys=1.59%, ctx=5548, majf=0, minf=1 00:21:59.871 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:21:59.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:59.871 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:59.871 issued rwts: total=0,5683,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:59.871 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:59.871 job4: (groupid=0, jobs=1): err= 0: pid=102334: Mon Jul 15 13:20:55 2024 00:21:59.871 write: IOPS=655, BW=164MiB/s (172MB/s)(1655MiB/10101msec); 0 zone resets 00:21:59.871 slat (usec): min=19, max=13525, avg=1504.78, stdev=2729.50 00:21:59.871 clat (msec): min=11, max=218, avg=96.09, stdev=31.42 00:21:59.871 lat (msec): min=11, max=218, avg=97.59, stdev=31.79 00:21:59.871 clat percentiles (msec): 00:21:59.871 | 1.00th=[ 39], 5.00th=[ 40], 10.00th=[ 41], 20.00th=[ 43], 00:21:59.871 | 30.00th=[ 106], 40.00th=[ 110], 50.00th=[ 112], 60.00th=[ 113], 00:21:59.871 | 70.00th=[ 115], 80.00th=[ 116], 90.00th=[ 118], 95.00th=[ 120], 00:21:59.871 | 99.00th=[ 124], 99.50th=[ 161], 99.90th=[ 205], 99.95th=[ 211], 00:21:59.871 | 99.99th=[ 220] 00:21:59.871 bw ( KiB/s): min=137728, max=397540, per=9.97%, avg=167768.20, stdev=71940.92, samples=20 00:21:59.871 iops : min= 538, max= 1552, avg=655.30, stdev=280.87, samples=20 00:21:59.871 lat (msec) : 20=0.09%, 50=22.33%, 100=2.61%, 250=74.96% 00:21:59.871 cpu : usr=1.67%, sys=1.81%, ctx=8354, majf=0, minf=1 00:21:59.871 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:21:59.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:59.871 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:59.871 issued rwts: total=0,6618,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:59.871 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:59.871 job5: (groupid=0, jobs=1): err= 0: pid=102335: Mon Jul 15 13:20:55 2024 00:21:59.871 write: IOPS=768, BW=192MiB/s (201MB/s)(1935MiB/10074msec); 0 zone resets 00:21:59.871 slat (usec): min=22, max=33238, avg=1286.34, stdev=2201.62 00:21:59.871 clat (msec): min=39, max=150, avg=81.89, stdev=11.43 00:21:59.871 lat (msec): min=39, max=150, avg=83.18, stdev=11.40 00:21:59.871 clat percentiles (msec): 00:21:59.871 | 1.00th=[ 73], 5.00th=[ 73], 10.00th=[ 74], 20.00th=[ 75], 00:21:59.871 | 30.00th=[ 78], 40.00th=[ 79], 50.00th=[ 79], 60.00th=[ 79], 00:21:59.871 | 70.00th=[ 80], 80.00th=[ 83], 90.00th=[ 102], 95.00th=[ 113], 00:21:59.871 | 99.00th=[ 118], 99.50th=[ 127], 99.90th=[ 146], 99.95th=[ 148], 00:21:59.871 | 99.99th=[ 150] 00:21:59.871 bw ( KiB/s): min=129282, max=211456, per=11.68%, avg=196529.50, stdev=22778.62, samples=20 00:21:59.871 iops : min= 505, max= 826, avg=767.65, stdev=89.08, samples=20 00:21:59.871 lat (msec) : 50=0.10%, 100=89.28%, 250=10.62% 00:21:59.871 cpu : usr=1.75%, sys=2.46%, ctx=9540, majf=0, minf=1 00:21:59.871 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:21:59.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:59.871 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:59.871 issued rwts: total=0,7740,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:59.871 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:59.871 job6: (groupid=0, jobs=1): err= 0: pid=102336: Mon Jul 15 13:20:55 2024 
00:21:59.871 write: IOPS=546, BW=137MiB/s (143MB/s)(1381MiB/10110msec); 0 zone resets 00:21:59.871 slat (usec): min=24, max=46061, avg=1779.91, stdev=3168.85 00:21:59.871 clat (msec): min=11, max=217, avg=115.30, stdev=16.42 00:21:59.871 lat (msec): min=11, max=217, avg=117.08, stdev=16.37 00:21:59.871 clat percentiles (msec): 00:21:59.871 | 1.00th=[ 60], 5.00th=[ 104], 10.00th=[ 107], 20.00th=[ 110], 00:21:59.871 | 30.00th=[ 111], 40.00th=[ 112], 50.00th=[ 114], 60.00th=[ 115], 00:21:59.871 | 70.00th=[ 116], 80.00th=[ 118], 90.00th=[ 123], 95.00th=[ 155], 00:21:59.871 | 99.00th=[ 167], 99.50th=[ 176], 99.90th=[ 211], 99.95th=[ 211], 00:21:59.871 | 99.99th=[ 218] 00:21:59.871 bw ( KiB/s): min=104448, max=147968, per=8.31%, avg=139765.25, stdev=11785.41, samples=20 00:21:59.871 iops : min= 408, max= 578, avg=545.95, stdev=46.06, samples=20 00:21:59.871 lat (msec) : 20=0.29%, 50=0.53%, 100=1.29%, 250=97.90% 00:21:59.871 cpu : usr=1.26%, sys=1.69%, ctx=4739, majf=0, minf=1 00:21:59.871 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:21:59.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:59.871 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:59.872 issued rwts: total=0,5523,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:59.872 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:59.872 job7: (groupid=0, jobs=1): err= 0: pid=102337: Mon Jul 15 13:20:55 2024 00:21:59.872 write: IOPS=533, BW=133MiB/s (140MB/s)(1349MiB/10116msec); 0 zone resets 00:21:59.872 slat (usec): min=19, max=30384, avg=1847.53, stdev=3172.61 00:21:59.872 clat (msec): min=13, max=232, avg=118.09, stdev=14.81 00:21:59.872 lat (msec): min=13, max=233, avg=119.94, stdev=14.68 00:21:59.872 clat percentiles (msec): 00:21:59.872 | 1.00th=[ 107], 5.00th=[ 108], 10.00th=[ 109], 20.00th=[ 111], 00:21:59.872 | 30.00th=[ 114], 40.00th=[ 115], 50.00th=[ 116], 60.00th=[ 116], 00:21:59.872 | 70.00th=[ 117], 80.00th=[ 120], 90.00th=[ 128], 95.00th=[ 155], 00:21:59.872 | 99.00th=[ 165], 99.50th=[ 184], 99.90th=[ 226], 99.95th=[ 226], 00:21:59.872 | 99.99th=[ 234] 00:21:59.872 bw ( KiB/s): min=104448, max=143872, per=8.11%, avg=136524.80, stdev=11221.39, samples=20 00:21:59.872 iops : min= 408, max= 562, avg=533.30, stdev=43.83, samples=20 00:21:59.872 lat (msec) : 20=0.07%, 50=0.30%, 100=0.44%, 250=99.18% 00:21:59.872 cpu : usr=1.38%, sys=1.37%, ctx=7347, majf=0, minf=1 00:21:59.872 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:21:59.872 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:59.872 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:59.872 issued rwts: total=0,5396,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:59.872 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:59.872 job8: (groupid=0, jobs=1): err= 0: pid=102338: Mon Jul 15 13:20:55 2024 00:21:59.872 write: IOPS=540, BW=135MiB/s (142MB/s)(1368MiB/10123msec); 0 zone resets 00:21:59.872 slat (usec): min=22, max=20119, avg=1811.38, stdev=3132.65 00:21:59.872 clat (msec): min=4, max=232, avg=116.52, stdev=16.64 00:21:59.872 lat (msec): min=4, max=233, avg=118.33, stdev=16.61 00:21:59.872 clat percentiles (msec): 00:21:59.872 | 1.00th=[ 41], 5.00th=[ 108], 10.00th=[ 109], 20.00th=[ 111], 00:21:59.872 | 30.00th=[ 114], 40.00th=[ 115], 50.00th=[ 115], 60.00th=[ 116], 00:21:59.872 | 70.00th=[ 117], 80.00th=[ 120], 90.00th=[ 128], 95.00th=[ 148], 00:21:59.872 | 99.00th=[ 159], 99.50th=[ 184], 
99.90th=[ 226], 99.95th=[ 226], 00:21:59.872 | 99.99th=[ 234] 00:21:59.872 bw ( KiB/s): min=106283, max=143360, per=8.23%, avg=138459.75, stdev=8319.91, samples=20 00:21:59.872 iops : min= 415, max= 560, avg=540.85, stdev=32.53, samples=20 00:21:59.872 lat (msec) : 10=0.15%, 20=0.33%, 50=0.69%, 100=1.24%, 250=97.59% 00:21:59.872 cpu : usr=1.35%, sys=1.37%, ctx=6682, majf=0, minf=1 00:21:59.872 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:21:59.872 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:59.872 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:59.872 issued rwts: total=0,5472,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:59.872 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:59.872 job9: (groupid=0, jobs=1): err= 0: pid=102339: Mon Jul 15 13:20:55 2024 00:21:59.872 write: IOPS=538, BW=135MiB/s (141MB/s)(1362MiB/10119msec); 0 zone resets 00:21:59.872 slat (usec): min=25, max=25739, avg=1806.77, stdev=3123.56 00:21:59.872 clat (msec): min=5, max=232, avg=116.93, stdev=15.81 00:21:59.872 lat (msec): min=5, max=233, avg=118.73, stdev=15.73 00:21:59.872 clat percentiles (msec): 00:21:59.872 | 1.00th=[ 52], 5.00th=[ 108], 10.00th=[ 109], 20.00th=[ 111], 00:21:59.872 | 30.00th=[ 114], 40.00th=[ 115], 50.00th=[ 115], 60.00th=[ 116], 00:21:59.872 | 70.00th=[ 117], 80.00th=[ 120], 90.00th=[ 127], 95.00th=[ 148], 00:21:59.872 | 99.00th=[ 159], 99.50th=[ 184], 99.90th=[ 226], 99.95th=[ 226], 00:21:59.872 | 99.99th=[ 234] 00:21:59.872 bw ( KiB/s): min=104448, max=143872, per=8.20%, avg=137881.60, stdev=8731.85, samples=20 00:21:59.872 iops : min= 408, max= 562, avg=538.60, stdev=34.11, samples=20 00:21:59.872 lat (msec) : 10=0.06%, 20=0.22%, 50=0.72%, 100=0.81%, 250=98.20% 00:21:59.872 cpu : usr=1.24%, sys=1.68%, ctx=8306, majf=0, minf=1 00:21:59.872 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:21:59.872 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:59.872 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:59.872 issued rwts: total=0,5449,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:59.872 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:59.872 job10: (groupid=0, jobs=1): err= 0: pid=102340: Mon Jul 15 13:20:55 2024 00:21:59.872 write: IOPS=557, BW=139MiB/s (146MB/s)(1408MiB/10108msec); 0 zone resets 00:21:59.872 slat (usec): min=23, max=10271, avg=1771.09, stdev=2998.84 00:21:59.872 clat (msec): min=10, max=221, avg=113.02, stdev= 9.88 00:21:59.872 lat (msec): min=10, max=221, avg=114.80, stdev= 9.55 00:21:59.872 clat percentiles (msec): 00:21:59.872 | 1.00th=[ 105], 5.00th=[ 107], 10.00th=[ 107], 20.00th=[ 108], 00:21:59.872 | 30.00th=[ 113], 40.00th=[ 113], 50.00th=[ 114], 60.00th=[ 114], 00:21:59.872 | 70.00th=[ 114], 80.00th=[ 115], 90.00th=[ 118], 95.00th=[ 122], 00:21:59.872 | 99.00th=[ 131], 99.50th=[ 171], 99.90th=[ 215], 99.95th=[ 215], 00:21:59.872 | 99.99th=[ 222] 00:21:59.872 bw ( KiB/s): min=131072, max=145920, per=8.48%, avg=142606.35, stdev=3738.47, samples=20 00:21:59.872 iops : min= 512, max= 570, avg=557.05, stdev=14.60, samples=20 00:21:59.872 lat (msec) : 20=0.14%, 50=0.28%, 100=0.50%, 250=99.08% 00:21:59.872 cpu : usr=1.42%, sys=1.32%, ctx=6784, majf=0, minf=1 00:21:59.872 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:21:59.872 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:59.872 complete : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:59.872 issued rwts: total=0,5633,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:59.872 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:59.872 00:21:59.872 Run status group 0 (all jobs): 00:21:59.872 WRITE: bw=1643MiB/s (1723MB/s), 133MiB/s-193MiB/s (140MB/s-202MB/s), io=16.2GiB (17.4GB), run=10068-10123msec 00:21:59.872 00:21:59.872 Disk stats (read/write): 00:21:59.872 nvme0n1: ios=49/11089, merge=0/0, ticks=47/1210483, in_queue=1210530, util=97.66% 00:21:59.872 nvme10n1: ios=49/11091, merge=0/0, ticks=28/1210218, in_queue=1210246, util=97.59% 00:21:59.872 nvme1n1: ios=15/15309, merge=0/0, ticks=23/1211446, in_queue=1211469, util=97.65% 00:21:59.872 nvme2n1: ios=22/11200, merge=0/0, ticks=10/1210731, in_queue=1210741, util=97.89% 00:21:59.872 nvme3n1: ios=0/13080, merge=0/0, ticks=0/1210559, in_queue=1210559, util=97.83% 00:21:59.872 nvme4n1: ios=0/15293, merge=0/0, ticks=0/1211373, in_queue=1211373, util=98.06% 00:21:59.872 nvme5n1: ios=0/10885, merge=0/0, ticks=0/1211926, in_queue=1211926, util=98.33% 00:21:59.872 nvme6n1: ios=0/10609, merge=0/0, ticks=0/1208809, in_queue=1208809, util=98.15% 00:21:59.872 nvme7n1: ios=0/10777, merge=0/0, ticks=0/1211674, in_queue=1211674, util=98.59% 00:21:59.872 nvme8n1: ios=0/10727, merge=0/0, ticks=0/1210316, in_queue=1210316, util=98.65% 00:21:59.872 nvme9n1: ios=0/11096, merge=0/0, ticks=0/1210492, in_queue=1210492, util=98.78% 00:21:59.872 13:20:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:21:59.872 13:20:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:21:59.872 13:20:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:59.872 13:20:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:59.872 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:59.872 13:20:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:21:59.872 13:20:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:21:59.872 13:20:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK1 00:21:59.872 13:20:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:21:59.872 13:20:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK1 00:21:59.872 13:20:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:21:59.872 13:20:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:21:59.872 13:20:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:59.872 13:20:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.872 13:20:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:59.872 13:20:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.872 13:20:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:59.872 13:20:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:21:59.872 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:21:59.872 13:20:55 
nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:21:59.872 13:20:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:21:59.872 13:20:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:21:59.872 13:20:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK2 00:21:59.872 13:20:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:21:59.872 13:20:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK2 00:21:59.872 13:20:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:21:59.872 13:20:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:59.872 13:20:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.872 13:20:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:59.872 13:20:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.872 13:20:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:59.872 13:20:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:21:59.872 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:21:59.872 13:20:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:21:59.872 13:20:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:21:59.872 13:20:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:21:59.872 13:20:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK3 00:21:59.872 13:20:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:21:59.872 13:20:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK3 00:21:59.872 13:20:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:21:59.872 13:20:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:21:59.872 13:20:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.872 13:20:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:59.872 13:20:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.872 13:20:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:59.872 13:20:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:21:59.872 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:21:59.872 13:20:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:21:59.872 13:20:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:21:59.872 13:20:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:21:59.872 13:20:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK4 00:21:59.872 13:20:56 nvmf_tcp.nvmf_multiconnection 
-- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:21:59.872 13:20:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK4 00:21:59.872 13:20:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:21:59.872 13:20:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:21:59.872 13:20:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.872 13:20:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:59.872 13:20:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.872 13:20:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:59.872 13:20:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:21:59.872 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:21:59.872 13:20:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:21:59.872 13:20:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:21:59.872 13:20:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:21:59.872 13:20:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK5 00:21:59.872 13:20:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:21:59.872 13:20:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK5 00:21:59.872 13:20:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:21:59.872 13:20:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:21:59.872 13:20:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.872 13:20:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:59.872 13:20:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.872 13:20:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:59.872 13:20:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:21:59.872 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:21:59.872 13:20:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:21:59.872 13:20:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:21:59.872 13:20:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:21:59.872 13:20:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK6 00:21:59.872 13:20:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:21:59.872 13:20:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK6 00:21:59.872 13:20:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:21:59.872 13:20:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:21:59.872 13:20:56 nvmf_tcp.nvmf_multiconnection 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.872 13:20:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:59.872 13:20:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.872 13:20:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:59.872 13:20:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:21:59.872 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:21:59.872 13:20:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:21:59.872 13:20:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:21:59.872 13:20:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK7 00:21:59.872 13:20:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:21:59.872 13:20:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:21:59.872 13:20:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK7 00:21:59.872 13:20:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:21:59.872 13:20:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:21:59.872 13:20:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.872 13:20:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:59.872 13:20:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.872 13:20:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:59.872 13:20:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:21:59.872 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:21:59.872 13:20:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:21:59.872 13:20:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:21:59.872 13:20:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:21:59.872 13:20:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK8 00:21:59.872 13:20:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK8 00:21:59.872 13:20:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:21:59.872 13:20:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:21:59.872 13:20:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:21:59.872 13:20:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.872 13:20:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:59.872 13:20:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.872 13:20:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:59.872 13:20:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode9 00:21:59.872 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:21:59.872 13:20:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:21:59.872 13:20:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:21:59.872 13:20:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:21:59.872 13:20:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK9 00:21:59.872 13:20:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK9 00:21:59.872 13:20:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:21:59.872 13:20:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:21:59.872 13:20:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:21:59.872 13:20:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.872 13:20:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:59.872 13:20:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.872 13:20:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:59.872 13:20:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:22:00.130 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:22:00.130 13:20:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:22:00.130 13:20:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:22:00.130 13:20:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:22:00.130 13:20:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK10 00:22:00.130 13:20:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:22:00.130 13:20:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK10 00:22:00.130 13:20:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:22:00.130 13:20:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:22:00.130 13:20:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.130 13:20:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:00.130 13:20:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.130 13:20:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:00.130 13:20:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:22:00.130 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:22:00.130 13:20:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:22:00.130 13:20:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:22:00.130 13:20:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:22:00.130 
13:20:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK11 00:22:00.130 13:20:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:22:00.130 13:20:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK11 00:22:00.130 13:20:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:22:00.130 13:20:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:22:00.130 13:20:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.130 13:20:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:00.130 13:20:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.130 13:20:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:22:00.130 13:20:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:22:00.130 13:20:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:22:00.130 13:20:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:00.130 13:20:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:22:00.130 13:20:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:00.130 13:20:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:22:00.130 13:20:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:00.130 13:20:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:00.130 rmmod nvme_tcp 00:22:00.387 rmmod nvme_fabrics 00:22:00.387 rmmod nvme_keyring 00:22:00.387 13:20:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:00.387 13:20:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:22:00.387 13:20:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:22:00.387 13:20:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 101638 ']' 00:22:00.387 13:20:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 101638 00:22:00.387 13:20:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@946 -- # '[' -z 101638 ']' 00:22:00.387 13:20:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@950 -- # kill -0 101638 00:22:00.387 13:20:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@951 -- # uname 00:22:00.387 13:20:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:00.387 13:20:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 101638 00:22:00.387 killing process with pid 101638 00:22:00.387 13:20:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:00.387 13:20:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:00.387 13:20:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@964 -- # echo 'killing process with pid 101638' 00:22:00.387 13:20:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@965 -- # kill 101638 00:22:00.387 13:20:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@970 -- # wait 101638 00:22:00.954 13:20:57 nvmf_tcp.nvmf_multiconnection -- 
nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:00.955 13:20:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:00.955 13:20:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:00.955 13:20:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:00.955 13:20:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:00.955 13:20:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:00.955 13:20:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:00.955 13:20:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:00.955 13:20:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:00.955 00:22:00.955 real 0m49.570s 00:22:00.955 user 2m45.831s 00:22:00.955 sys 0m25.133s 00:22:00.955 ************************************ 00:22:00.955 END TEST nvmf_multiconnection 00:22:00.955 ************************************ 00:22:00.955 13:20:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:00.955 13:20:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:00.955 13:20:57 nvmf_tcp -- nvmf/nvmf.sh@68 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:22:00.955 13:20:57 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:22:00.955 13:20:57 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:00.955 13:20:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:00.955 ************************************ 00:22:00.955 START TEST nvmf_initiator_timeout 00:22:00.955 ************************************ 00:22:00.955 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:22:00.955 * Looking for test storage... 
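The trace above is the multiconnection teardown: for each subsystem the host-side connection is dropped with nvme disconnect, waitforserial_disconnect polls lsblk until the matching SPDKn serial is gone, and only then is the subsystem removed over RPC. A condensed bash sketch of that pattern (the rpc.py path and the count of 11 subsystems are assumptions for illustration; this is not the actual multiconnection.sh helper code):

    #!/usr/bin/env bash
    set -e
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed location of the SPDK RPC client
    NVMF_SUBSYS=11                                        # cnode1..cnode11 in this run

    for i in $(seq 1 "$NVMF_SUBSYS"); do
        # Drop the kernel initiator's connection first.
        nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"
        # Wait until the namespace with serial SPDK${i} disappears from lsblk.
        while lsblk -l -o NAME,SERIAL | grep -q -w "SPDK${i}"; do
            sleep 1
        done
        # Only then delete the subsystem on the target side.
        "$rpc_py" nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
    done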
00:22:00.955 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:00.955 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:00.955 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:22:00.955 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:00.955 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:00.955 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:00.955 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:00.955 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:00.955 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:00.955 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:00.955 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:00.955 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:00.955 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:00.955 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:22:00.955 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=c8b8b44b-387e-43b9-a950-dc0d98528a02 00:22:00.955 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:00.955 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:00.955 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:00.955 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:00.955 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:00.955 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:00.955 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:00.955 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:00.955 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.955 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.955 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.955 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:22:00.955 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.955 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:22:00.955 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:00.955 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:00.955 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:00.955 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:00.955 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:00.955 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:00.955 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:00.955 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:00.955 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:00.955 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:00.955 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:22:00.955 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:00.955 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:00.955 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:00.955 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:00.955 13:20:57 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:00.955 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:00.955 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:00.955 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:00.955 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:22:00.955 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:22:00.955 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:22:00.955 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:22:00.955 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:22:00.955 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 00:22:00.955 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:00.955 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:00.955 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:00.955 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:00.955 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:00.955 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:00.955 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:00.955 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:00.955 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:00.955 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:00.955 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:00.955 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:00.955 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:00.955 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:00.955 Cannot find device "nvmf_tgt_br" 00:22:00.955 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@155 -- # true 00:22:00.955 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:01.214 Cannot find device "nvmf_tgt_br2" 00:22:01.214 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@156 -- # true 00:22:01.214 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:01.214 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:01.214 Cannot find device "nvmf_tgt_br" 00:22:01.214 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@158 -- # true 00:22:01.214 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:01.214 Cannot find device "nvmf_tgt_br2" 00:22:01.214 13:20:57 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@159 -- # true 00:22:01.214 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:01.214 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:01.214 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:01.214 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:01.214 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # true 00:22:01.214 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:01.214 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:01.214 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # true 00:22:01.214 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:01.214 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:01.214 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:01.214 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:01.214 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:01.214 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:01.214 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:01.214 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:01.214 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:01.214 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:01.214 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:01.214 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:01.214 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:01.214 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:01.214 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:01.214 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:01.214 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:01.214 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:01.214 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:01.214 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:01.471 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 
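The nvmf_veth_init trace above builds the virtual test network: stale interfaces are flushed first (hence the harmless "Cannot find device" messages), then a network namespace, three veth pairs, and a bridge are created so that the host (10.0.0.1) can reach two target addresses (10.0.0.2 and 10.0.0.3) inside the namespace. A condensed sketch of that topology, using only the ip commands shown in the trace:

    NS=nvmf_tgt_ns_spdk
    ip netns add "$NS"
    # One veth pair for the initiator side, two for the target side.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    # The target-side ends move into the namespace the nvmf_tgt app will run in.
    ip link set nvmf_tgt_if  netns "$NS"
    ip link set nvmf_tgt_if2 netns "$NS"
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec "$NS" ip link set nvmf_tgt_if up
    ip netns exec "$NS" ip link set nvmf_tgt_if2 up
    ip netns exec "$NS" ip link set lo up
    # A bridge ties the host-side peers together.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br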
00:22:01.471 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:01.471 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:01.471 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:01.471 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:01.471 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.143 ms 00:22:01.471 00:22:01.471 --- 10.0.0.2 ping statistics --- 00:22:01.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:01.471 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:22:01.471 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:01.471 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:01.471 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:22:01.471 00:22:01.471 --- 10.0.0.3 ping statistics --- 00:22:01.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:01.471 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:22:01.471 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:01.471 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:01.471 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 00:22:01.471 00:22:01.471 --- 10.0.0.1 ping statistics --- 00:22:01.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:01.471 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:22:01.471 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:01.471 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@433 -- # return 0 00:22:01.471 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:01.471 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:01.471 13:20:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:01.471 13:20:58 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:01.471 13:20:58 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:01.471 13:20:58 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:01.471 13:20:58 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:01.471 13:20:58 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:22:01.471 13:20:58 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:01.471 13:20:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:01.471 13:20:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:01.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
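With the topology up, the trace opens the NVMe/TCP port in the firewall and verifies reachability in both directions before loading the kernel initiator module. A condensed recap of those checks (port 4420 is the NVMF_PORT value set in nvmf/common.sh earlier in this test):

    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2                                   # host -> first target address
    ping -c 1 10.0.0.3                                   # host -> second target address
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # namespace -> host
    modprobe nvme-tcp                                    # NVMe/TCP support for the kernel initiator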
00:22:01.471 13:20:58 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=102708 00:22:01.471 13:20:58 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 102708 00:22:01.471 13:20:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@827 -- # '[' -z 102708 ']' 00:22:01.471 13:20:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:01.471 13:20:58 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:01.471 13:20:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:01.471 13:20:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:01.471 13:20:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:01.471 13:20:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:01.471 [2024-07-15 13:20:58.121265] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:22:01.471 [2024-07-15 13:20:58.121405] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:01.729 [2024-07-15 13:20:58.265434] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:01.729 [2024-07-15 13:20:58.412222] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:01.729 [2024-07-15 13:20:58.412612] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:01.729 [2024-07-15 13:20:58.412793] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:01.729 [2024-07-15 13:20:58.412950] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:01.729 [2024-07-15 13:20:58.413002] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
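nvmfappstart above runs the target inside the namespace by prepending the netns wrapper to the application command line, then waits for the RPC socket to come up. A sketch of that sequence; the polling loop only illustrates what waitforlisten achieves and is not its actual implementation, and the rpc.py path is an assumption:

    NS=nvmf_tgt_ns_spdk
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Same invocation as in the trace: shm id 0, tracepoint mask 0xFFFF, 4-core mask.
    ip netns exec "$NS" \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Poll until the target answers on /var/tmp/spdk.sock (or exits early).
    until "$rpc_py" -s /var/tmp/spdk.sock -t 1 rpc_get_methods > /dev/null 2>&1; do
        kill -0 "$nvmfpid"
        sleep 0.5
    done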
00:22:01.729 [2024-07-15 13:20:58.413593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:01.729 [2024-07-15 13:20:58.413877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:01.729 [2024-07-15 13:20:58.413790] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:01.729 [2024-07-15 13:20:58.413869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:02.661 13:20:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:02.661 13:20:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@860 -- # return 0 00:22:02.661 13:20:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:02.661 13:20:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:02.661 13:20:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:02.661 13:20:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:02.661 13:20:59 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:22:02.661 13:20:59 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:02.661 13:20:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.661 13:20:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:02.661 Malloc0 00:22:02.661 13:20:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.661 13:20:59 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:22:02.661 13:20:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.661 13:20:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:02.661 Delay0 00:22:02.661 13:20:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.661 13:20:59 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:02.661 13:20:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.661 13:20:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:02.661 [2024-07-15 13:20:59.262709] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:02.661 13:20:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.661 13:20:59 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:22:02.661 13:20:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.661 13:20:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:02.661 13:20:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.661 13:20:59 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:02.661 13:20:59 nvmf_tcp.nvmf_initiator_timeout -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.661 13:20:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:02.661 13:20:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.661 13:20:59 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:02.661 13:20:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.661 13:20:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:02.661 [2024-07-15 13:20:59.300080] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:02.661 13:20:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.661 13:20:59 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid=c8b8b44b-387e-43b9-a950-dc0d98528a02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:22:02.918 13:20:59 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:22:02.918 13:20:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1194 -- # local i=0 00:22:02.918 13:20:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:22:02.918 13:20:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:22:02.918 13:20:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1201 -- # sleep 2 00:22:04.885 13:21:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:22:04.885 13:21:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:22:04.885 13:21:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:22:04.885 13:21:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:22:04.885 13:21:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:22:04.885 13:21:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # return 0 00:22:04.885 13:21:01 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=102787 00:22:04.885 13:21:01 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:22:04.885 13:21:01 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:22:04.885 [global] 00:22:04.885 thread=1 00:22:04.885 invalidate=1 00:22:04.885 rw=write 00:22:04.885 time_based=1 00:22:04.885 runtime=60 00:22:04.885 ioengine=libaio 00:22:04.885 direct=1 00:22:04.885 bs=4096 00:22:04.885 iodepth=1 00:22:04.885 norandommap=0 00:22:04.885 numjobs=1 00:22:04.885 00:22:04.885 verify_dump=1 00:22:04.885 verify_backlog=512 00:22:04.885 verify_state_save=0 00:22:04.885 do_verify=1 00:22:04.885 verify=crc32c-intel 00:22:04.885 [job0] 00:22:04.885 filename=/dev/nvme0n1 00:22:04.885 Could not set queue depth (nvme0n1) 00:22:05.146 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:05.146 fio-3.35 00:22:05.146 Starting 1 thread 
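The RPC sequence above provisions the whole data path for the timeout test: a 64 MB malloc bdev wrapped in a delay bdev with 30 microsecond latencies, exported through a TCP subsystem, connected from the host, and exercised by a 60 second fio verify-write job. Condensed into a sketch (the rpc.py path is an assumption; all arguments are the ones shown in the trace):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc_py" bdev_malloc_create 64 512 -b Malloc0
    # Delay0 wraps Malloc0 with 30 us latencies (the avg/p99 read/write knobs updated later in the test).
    "$rpc_py" bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30
    "$rpc_py" nvmf_create_transport -t tcp -o -u 8192
    "$rpc_py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    "$rpc_py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Host side: connect, then run the fio wrapper against the new namespace in the background.
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 \
        --hostid=c8b8b44b-387e-43b9-a950-dc0d98528a02
    /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v &
    fio_pid=$!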
00:22:08.451 13:21:04 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:22:08.451 13:21:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.451 13:21:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:08.451 true 00:22:08.451 13:21:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.451 13:21:04 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:22:08.451 13:21:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.451 13:21:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:08.451 true 00:22:08.451 13:21:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.451 13:21:04 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:22:08.451 13:21:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.451 13:21:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:08.451 true 00:22:08.451 13:21:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.451 13:21:04 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:22:08.451 13:21:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.451 13:21:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:08.451 true 00:22:08.451 13:21:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.451 13:21:04 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:22:10.976 13:21:07 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:22:10.976 13:21:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.976 13:21:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:10.976 true 00:22:10.976 13:21:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.976 13:21:07 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:22:10.976 13:21:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.976 13:21:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:10.976 true 00:22:10.976 13:21:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.976 13:21:07 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:22:10.976 13:21:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.976 13:21:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:10.976 true 00:22:10.976 13:21:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.976 13:21:07 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd 
bdev_delay_update_latency Delay0 p99_write 30 00:22:10.976 13:21:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.976 13:21:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:10.976 true 00:22:10.976 13:21:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.976 13:21:07 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:22:10.976 13:21:07 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 102787 00:23:07.260 00:23:07.260 job0: (groupid=0, jobs=1): err= 0: pid=102808: Mon Jul 15 13:22:01 2024 00:23:07.260 read: IOPS=844, BW=3379KiB/s (3460kB/s)(198MiB/60000msec) 00:23:07.260 slat (usec): min=13, max=19912, avg=16.68, stdev=94.64 00:23:07.260 clat (usec): min=100, max=40702k, avg=991.89, stdev=180785.20 00:23:07.260 lat (usec): min=183, max=40702k, avg=1008.57, stdev=180785.22 00:23:07.260 clat percentiles (usec): 00:23:07.260 | 1.00th=[ 174], 5.00th=[ 178], 10.00th=[ 180], 20.00th=[ 182], 00:23:07.260 | 30.00th=[ 184], 40.00th=[ 186], 50.00th=[ 188], 60.00th=[ 190], 00:23:07.260 | 70.00th=[ 192], 80.00th=[ 196], 90.00th=[ 200], 95.00th=[ 206], 00:23:07.260 | 99.00th=[ 221], 99.50th=[ 231], 99.90th=[ 322], 99.95th=[ 359], 00:23:07.260 | 99.99th=[ 474] 00:23:07.260 write: IOPS=850, BW=3402KiB/s (3483kB/s)(199MiB/60000msec); 0 zone resets 00:23:07.260 slat (usec): min=19, max=649, avg=23.40, stdev= 6.28 00:23:07.260 clat (usec): min=107, max=1911, avg=147.00, stdev=16.41 00:23:07.260 lat (usec): min=151, max=1937, avg=170.40, stdev=17.98 00:23:07.260 clat percentiles (usec): 00:23:07.260 | 1.00th=[ 135], 5.00th=[ 137], 10.00th=[ 139], 20.00th=[ 141], 00:23:07.260 | 30.00th=[ 143], 40.00th=[ 145], 50.00th=[ 145], 60.00th=[ 147], 00:23:07.260 | 70.00th=[ 149], 80.00th=[ 153], 90.00th=[ 157], 95.00th=[ 161], 00:23:07.260 | 99.00th=[ 174], 99.50th=[ 180], 99.90th=[ 277], 99.95th=[ 355], 00:23:07.260 | 99.99th=[ 906] 00:23:07.260 bw ( KiB/s): min= 4360, max=12288, per=100.00%, avg=10477.26, stdev=1670.73, samples=38 00:23:07.261 iops : min= 1090, max= 3072, avg=2619.32, stdev=417.68, samples=38 00:23:07.261 lat (usec) : 250=99.84%, 500=0.15%, 750=0.01%, 1000=0.01% 00:23:07.261 lat (msec) : 2=0.01%, 4=0.01%, >=2000=0.01% 00:23:07.261 cpu : usr=0.67%, sys=2.43%, ctx=101754, majf=0, minf=2 00:23:07.261 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:07.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:07.261 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:07.261 issued rwts: total=50688,51023,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:07.261 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:07.261 00:23:07.261 Run status group 0 (all jobs): 00:23:07.261 READ: bw=3379KiB/s (3460kB/s), 3379KiB/s-3379KiB/s (3460kB/s-3460kB/s), io=198MiB (208MB), run=60000-60000msec 00:23:07.261 WRITE: bw=3402KiB/s (3483kB/s), 3402KiB/s-3402KiB/s (3483kB/s-3483kB/s), io=199MiB (209MB), run=60000-60000msec 00:23:07.261 00:23:07.261 Disk stats (read/write): 00:23:07.261 nvme0n1: ios=50767/50688, merge=0/0, ticks=9981/7992, in_queue=17973, util=99.83% 00:23:07.261 13:22:01 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:23:07.261 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:07.261 13:22:01 nvmf_tcp.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:23:07.261 13:22:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1215 -- # local i=0 00:23:07.261 13:22:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:23:07.261 13:22:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:07.261 13:22:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:23:07.261 13:22:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:07.261 13:22:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # return 0 00:23:07.261 13:22:01 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:23:07.261 nvmf hotplug test: fio successful as expected 00:23:07.261 13:22:01 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:23:07.261 13:22:01 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:07.261 13:22:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.261 13:22:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:07.261 13:22:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.261 13:22:01 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:23:07.261 13:22:01 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:23:07.261 13:22:01 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:23:07.261 13:22:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:07.261 13:22:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:23:07.261 13:22:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:07.261 13:22:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:23:07.261 13:22:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:07.261 13:22:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:07.261 rmmod nvme_tcp 00:23:07.261 rmmod nvme_fabrics 00:23:07.261 rmmod nvme_keyring 00:23:07.261 13:22:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:07.261 13:22:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:23:07.261 13:22:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:23:07.261 13:22:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 102708 ']' 00:23:07.261 13:22:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 102708 00:23:07.261 13:22:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@946 -- # '[' -z 102708 ']' 00:23:07.261 13:22:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@950 -- # kill -0 102708 00:23:07.261 13:22:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@951 -- # uname 00:23:07.261 13:22:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:07.261 13:22:01 nvmf_tcp.nvmf_initiator_timeout -- 
common/autotest_common.sh@952 -- # ps --no-headers -o comm= 102708 00:23:07.261 killing process with pid 102708 00:23:07.261 13:22:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:07.261 13:22:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:07.261 13:22:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # echo 'killing process with pid 102708' 00:23:07.261 13:22:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@965 -- # kill 102708 00:23:07.261 13:22:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@970 -- # wait 102708 00:23:07.261 13:22:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:07.261 13:22:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:07.261 13:22:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:07.261 13:22:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:07.261 13:22:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:07.261 13:22:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:07.261 13:22:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:07.261 13:22:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:07.261 13:22:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:07.261 00:23:07.261 real 1m4.669s 00:23:07.261 user 4m5.557s 00:23:07.261 sys 0m9.696s 00:23:07.261 13:22:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:07.261 13:22:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:07.261 ************************************ 00:23:07.261 END TEST nvmf_initiator_timeout 00:23:07.261 ************************************ 00:23:07.261 13:22:02 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ virt == phy ]] 00:23:07.261 13:22:02 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:23:07.261 13:22:02 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:07.261 13:22:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:07.261 13:22:02 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:23:07.261 13:22:02 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:07.261 13:22:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:07.261 13:22:02 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:23:07.261 13:22:02 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:07.261 13:22:02 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:07.261 13:22:02 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:07.261 13:22:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:07.261 ************************************ 00:23:07.261 START TEST nvmf_multicontroller 00:23:07.261 ************************************ 00:23:07.261 13:22:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:07.261 * Looking for test storage... 
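The initiator_timeout test that wraps up above works by fault injection on the delay bdev: a few seconds into the 60 second fio run, all four Delay0 latencies are raised to roughly 31 seconds (beyond the initiator's default I/O timeout), held briefly, then dropped back to 30 microseconds; the test passes if fio still finishes its verify pass ("nvmf hotplug test: fio successful as expected"). A sketch of that toggle, with the values exactly as printed in the trace (in microseconds) and the rpc.py path assumed as before:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # fio_pid: the backgrounded fio-wrapper from the provisioning sketch above (102787 in this run).
    sleep 3                                  # let fio get going first
    "$rpc_py" bdev_delay_update_latency Delay0 avg_read  31000000
    "$rpc_py" bdev_delay_update_latency Delay0 avg_write 31000000
    "$rpc_py" bdev_delay_update_latency Delay0 p99_read  31000000
    "$rpc_py" bdev_delay_update_latency Delay0 p99_write 310000000
    sleep 3                                  # keep the delays in place briefly
    "$rpc_py" bdev_delay_update_latency Delay0 avg_read  30
    "$rpc_py" bdev_delay_update_latency Delay0 avg_write 30
    "$rpc_py" bdev_delay_update_latency Delay0 p99_read  30
    "$rpc_py" bdev_delay_update_latency Delay0 p99_write 30
    wait "$fio_pid" && echo 'nvmf hotplug test: fio successful as expected'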
00:23:07.261 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:07.261 13:22:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:07.261 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:23:07.261 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:07.261 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:07.261 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:07.261 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:07.261 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:07.261 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:07.261 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:07.261 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:07.261 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:07.261 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:07.261 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:23:07.262 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=c8b8b44b-387e-43b9-a950-dc0d98528a02 00:23:07.262 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:07.262 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:07.262 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:07.262 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:07.262 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:07.262 13:22:02 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:07.262 13:22:02 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:07.262 13:22:02 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:07.262 13:22:02 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.262 13:22:02 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.262 13:22:02 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.262 13:22:02 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:23:07.262 13:22:02 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.262 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:23:07.262 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:07.262 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:07.262 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:07.262 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:07.262 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:07.262 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:07.262 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:07.262 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:07.262 13:22:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:07.262 13:22:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:07.262 13:22:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:23:07.262 13:22:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:23:07.262 13:22:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:07.262 13:22:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:23:07.262 13:22:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:23:07.262 13:22:02 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:07.262 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:07.262 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:07.262 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:07.262 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:07.262 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:07.262 13:22:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:07.262 13:22:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:07.262 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:23:07.262 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:23:07.262 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:23:07.262 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:23:07.262 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:23:07.262 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@432 -- # nvmf_veth_init 00:23:07.262 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:07.262 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:07.262 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:07.262 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:07.262 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:07.262 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:07.262 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:07.262 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:07.262 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:07.262 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:07.262 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:07.262 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:07.262 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:07.262 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:07.262 Cannot find device "nvmf_tgt_br" 00:23:07.262 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@155 -- # true 00:23:07.262 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:07.262 Cannot find device "nvmf_tgt_br2" 00:23:07.262 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@156 -- # true 00:23:07.262 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:07.262 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@158 
-- # ip link set nvmf_tgt_br down 00:23:07.262 Cannot find device "nvmf_tgt_br" 00:23:07.262 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@158 -- # true 00:23:07.262 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:07.262 Cannot find device "nvmf_tgt_br2" 00:23:07.262 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@159 -- # true 00:23:07.262 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:07.262 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:07.262 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:07.262 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:07.262 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@162 -- # true 00:23:07.262 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:07.262 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:07.262 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@163 -- # true 00:23:07.262 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:07.262 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:07.262 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:07.262 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:07.263 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:07.263 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:07.263 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:07.263 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:07.263 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:07.263 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:07.263 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:07.263 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:07.263 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:07.263 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:07.263 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:07.263 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:07.263 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:07.263 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:07.263 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@196 
-- # ip link set nvmf_init_br master nvmf_br 00:23:07.263 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:07.263 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:07.263 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:07.263 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:07.263 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:07.263 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:07.263 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.105 ms 00:23:07.263 00:23:07.263 --- 10.0.0.2 ping statistics --- 00:23:07.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:07.263 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:23:07.263 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:07.263 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:07.263 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:23:07.263 00:23:07.263 --- 10.0.0.3 ping statistics --- 00:23:07.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:07.263 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:23:07.263 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:07.263 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:07.263 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:23:07.263 00:23:07.263 --- 10.0.0.1 ping statistics --- 00:23:07.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:07.263 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:23:07.263 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:07.263 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@433 -- # return 0 00:23:07.263 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:07.263 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:07.263 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:07.263 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:07.263 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:07.263 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:07.263 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:07.263 13:22:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:23:07.263 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:07.263 13:22:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:07.263 13:22:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:07.263 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=103623 00:23:07.263 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:07.263 13:22:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- 
# waitforlisten 103623 00:23:07.263 13:22:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 103623 ']' 00:23:07.263 13:22:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:07.263 13:22:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:07.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:07.263 13:22:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:07.263 13:22:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:07.263 13:22:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:07.263 [2024-07-15 13:22:02.821855] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:23:07.263 [2024-07-15 13:22:02.821954] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:07.263 [2024-07-15 13:22:02.962853] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:07.263 [2024-07-15 13:22:03.068295] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:07.263 [2024-07-15 13:22:03.068364] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:07.263 [2024-07-15 13:22:03.068378] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:07.263 [2024-07-15 13:22:03.068389] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:07.263 [2024-07-15 13:22:03.068399] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
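For reference, the nvmf_veth_init sequence traced above reduces to the sketch below. The namespace, interface and address names are exactly the ones that appear in this log; the authoritative logic lives in test/nvmf/common.sh, so read this as a condensed illustration of the topology the test builds, not as the helper itself.

  # one namespace for the target, three veth pairs (host end / bridge end)
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

  # target-side interfaces move into the namespace; 10.0.0.1 stays with the
  # initiator, 10.0.0.2 and 10.0.0.3 become the two target addresses
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

  # bring everything up and tie the host-side ends together with a bridge
  for link in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$link" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br

  # open the NVMe/TCP port on the initiator interface and allow bridge traffic
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  # the target itself then runs inside that namespace (the nvmfappstart above)
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE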
00:23:07.263 [2024-07-15 13:22:03.069237] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:07.263 [2024-07-15 13:22:03.069423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:07.263 [2024-07-15 13:22:03.069432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:07.263 13:22:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:07.263 13:22:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:23:07.263 13:22:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:07.263 13:22:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:07.263 13:22:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:07.263 13:22:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:07.263 13:22:03 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:07.263 13:22:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.263 13:22:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:07.263 [2024-07-15 13:22:03.899779] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:07.263 13:22:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.263 13:22:03 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:07.263 13:22:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.263 13:22:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:07.263 Malloc0 00:23:07.263 13:22:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.263 13:22:03 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:07.263 13:22:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.263 13:22:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:07.263 13:22:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.263 13:22:03 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:07.263 13:22:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.263 13:22:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:07.263 13:22:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.264 13:22:03 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:07.264 13:22:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.264 13:22:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:07.264 [2024-07-15 13:22:03.961526] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:07.264 13:22:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.264 
13:22:03 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:07.264 13:22:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.264 13:22:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:07.264 [2024-07-15 13:22:03.969486] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:07.264 13:22:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.264 13:22:03 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:07.264 13:22:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.264 13:22:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:07.264 Malloc1 00:23:07.264 13:22:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.264 13:22:03 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:23:07.264 13:22:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.264 13:22:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:07.521 13:22:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.521 13:22:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:23:07.521 13:22:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.521 13:22:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:07.521 13:22:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.521 13:22:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:07.521 13:22:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.521 13:22:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:07.521 13:22:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.521 13:22:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:23:07.521 13:22:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.521 13:22:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:07.521 13:22:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.521 13:22:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=103675 00:23:07.521 13:22:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:23:07.521 13:22:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:07.521 13:22:04 nvmf_tcp.nvmf_multicontroller -- 
host/multicontroller.sh@47 -- # waitforlisten 103675 /var/tmp/bdevperf.sock 00:23:07.521 13:22:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 103675 ']' 00:23:07.521 13:22:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:07.521 13:22:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:07.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:07.521 13:22:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:07.521 13:22:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:07.521 13:22:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:08.457 13:22:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:08.457 13:22:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:23:08.457 13:22:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:23:08.457 13:22:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.457 13:22:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:08.716 NVMe0n1 00:23:08.716 13:22:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.716 13:22:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:08.716 13:22:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:23:08.716 13:22:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.716 13:22:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:08.716 13:22:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.716 1 00:23:08.716 13:22:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:08.716 13:22:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:23:08.716 13:22:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:08.716 13:22:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:08.716 13:22:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:08.716 13:22:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:08.716 13:22:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:08.716 13:22:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 
10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:08.716 13:22:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.716 13:22:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:08.716 2024/07/15 13:22:05 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostnqn:nqn.2021-09-7.io.spdk:00001 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:23:08.716 request: 00:23:08.716 { 00:23:08.716 "method": "bdev_nvme_attach_controller", 00:23:08.716 "params": { 00:23:08.716 "name": "NVMe0", 00:23:08.716 "trtype": "tcp", 00:23:08.716 "traddr": "10.0.0.2", 00:23:08.716 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:23:08.716 "hostaddr": "10.0.0.2", 00:23:08.716 "hostsvcid": "60000", 00:23:08.716 "adrfam": "ipv4", 00:23:08.716 "trsvcid": "4420", 00:23:08.716 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:23:08.716 } 00:23:08.716 } 00:23:08.716 Got JSON-RPC error response 00:23:08.716 GoRPCClient: error on JSON-RPC call 00:23:08.716 13:22:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:08.716 13:22:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:23:08.716 13:22:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:08.716 13:22:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:08.716 13:22:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:08.717 13:22:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:08.717 13:22:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:23:08.717 13:22:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:08.717 13:22:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:08.717 13:22:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:08.717 13:22:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:08.717 13:22:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:08.717 13:22:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:08.717 13:22:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.717 13:22:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:08.717 2024/07/15 13:22:05 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 
trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:23:08.717 request: 00:23:08.717 { 00:23:08.717 "method": "bdev_nvme_attach_controller", 00:23:08.717 "params": { 00:23:08.717 "name": "NVMe0", 00:23:08.717 "trtype": "tcp", 00:23:08.717 "traddr": "10.0.0.2", 00:23:08.717 "hostaddr": "10.0.0.2", 00:23:08.717 "hostsvcid": "60000", 00:23:08.717 "adrfam": "ipv4", 00:23:08.717 "trsvcid": "4420", 00:23:08.717 "subnqn": "nqn.2016-06.io.spdk:cnode2" 00:23:08.717 } 00:23:08.717 } 00:23:08.717 Got JSON-RPC error response 00:23:08.717 GoRPCClient: error on JSON-RPC call 00:23:08.717 13:22:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:08.717 13:22:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:23:08.717 13:22:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:08.717 13:22:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:08.717 13:22:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:08.717 13:22:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:08.717 13:22:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:23:08.717 13:22:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:08.717 13:22:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:08.717 13:22:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:08.717 13:22:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:08.717 13:22:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:08.717 13:22:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:08.717 13:22:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.717 13:22:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:08.717 2024/07/15 13:22:05 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:disable name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:23:08.717 request: 00:23:08.717 { 00:23:08.717 "method": "bdev_nvme_attach_controller", 00:23:08.717 "params": { 00:23:08.717 "name": "NVMe0", 00:23:08.717 "trtype": "tcp", 00:23:08.717 "traddr": "10.0.0.2", 00:23:08.717 "hostaddr": "10.0.0.2", 00:23:08.717 "hostsvcid": "60000", 00:23:08.717 "adrfam": "ipv4", 00:23:08.717 "trsvcid": "4420", 00:23:08.717 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:08.717 
"multipath": "disable" 00:23:08.717 } 00:23:08.717 } 00:23:08.717 Got JSON-RPC error response 00:23:08.717 GoRPCClient: error on JSON-RPC call 00:23:08.717 13:22:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:08.717 13:22:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:23:08.717 13:22:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:08.717 13:22:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:08.717 13:22:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:08.717 13:22:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:08.717 13:22:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:23:08.717 13:22:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:08.717 13:22:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:08.717 13:22:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:08.717 13:22:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:08.717 13:22:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:08.717 13:22:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:08.717 13:22:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.717 13:22:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:08.717 2024/07/15 13:22:05 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:failover name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:23:08.717 request: 00:23:08.717 { 00:23:08.717 "method": "bdev_nvme_attach_controller", 00:23:08.717 "params": { 00:23:08.717 "name": "NVMe0", 00:23:08.717 "trtype": "tcp", 00:23:08.717 "traddr": "10.0.0.2", 00:23:08.717 "hostaddr": "10.0.0.2", 00:23:08.717 "hostsvcid": "60000", 00:23:08.717 "adrfam": "ipv4", 00:23:08.717 "trsvcid": "4420", 00:23:08.717 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:08.717 "multipath": "failover" 00:23:08.717 } 00:23:08.717 } 00:23:08.717 Got JSON-RPC error response 00:23:08.717 GoRPCClient: error on JSON-RPC call 00:23:08.717 13:22:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:08.717 13:22:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:23:08.717 13:22:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:08.717 13:22:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- 
# [[ -n '' ]] 00:23:08.717 13:22:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:08.717 13:22:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:08.717 13:22:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.717 13:22:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:08.717 00:23:08.717 13:22:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.717 13:22:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:08.717 13:22:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.717 13:22:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:08.717 13:22:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.717 13:22:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:23:08.717 13:22:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.717 13:22:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:08.977 00:23:08.977 13:22:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.977 13:22:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:23:08.977 13:22:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:08.977 13:22:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.977 13:22:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:08.977 13:22:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.977 13:22:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:23:08.977 13:22:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:09.936 0 00:23:09.936 13:22:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:23:09.936 13:22:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.936 13:22:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:09.936 13:22:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.936 13:22:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 103675 00:23:09.936 13:22:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 103675 ']' 00:23:09.936 13:22:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@950 -- # kill -0 103675 00:23:09.936 13:22:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname 00:23:09.936 13:22:06 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:09.936 13:22:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 103675 00:23:09.936 killing process with pid 103675 00:23:09.936 13:22:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:09.936 13:22:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:09.936 13:22:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 103675' 00:23:09.936 13:22:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 103675 00:23:09.936 13:22:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 103675 00:23:10.194 13:22:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:10.194 13:22:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.194 13:22:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:10.194 13:22:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.194 13:22:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:10.194 13:22:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.194 13:22:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:10.194 13:22:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.194 13:22:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:23:10.194 13:22:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:23:10.194 13:22:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file 00:23:10.194 13:22:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:23:10.194 13:22:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # sort -u 00:23:10.194 13:22:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1609 -- # cat 00:23:10.194 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:23:10.194 [2024-07-15 13:22:04.095707] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:23:10.194 [2024-07-15 13:22:04.095988] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103675 ] 00:23:10.194 [2024-07-15 13:22:04.238068] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:10.194 [2024-07-15 13:22:04.348520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:10.194 [2024-07-15 13:22:05.457698] bdev.c:4580:bdev_name_add: *ERROR*: Bdev name 2020d638-9590-4531-b113-d70113f9e372 already exists 00:23:10.194 [2024-07-15 13:22:05.457764] bdev.c:7696:bdev_register: *ERROR*: Unable to add uuid:2020d638-9590-4531-b113-d70113f9e372 alias for bdev NVMe1n1 00:23:10.194 [2024-07-15 13:22:05.457787] bdev_nvme.c:4314:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:23:10.194 Running I/O for 1 seconds... 
00:23:10.194 00:23:10.194 Latency(us) 00:23:10.194 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:10.194 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:23:10.194 NVMe0n1 : 1.00 19215.08 75.06 0.00 0.00 6650.63 2085.24 11260.28 00:23:10.194 =================================================================================================================== 00:23:10.194 Total : 19215.08 75.06 0.00 0.00 6650.63 2085.24 11260.28 00:23:10.194 Received shutdown signal, test time was about 1.000000 seconds 00:23:10.194 00:23:10.194 Latency(us) 00:23:10.194 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:10.194 =================================================================================================================== 00:23:10.194 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:10.194 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:23:10.194 13:22:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1614 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:23:10.194 13:22:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file 00:23:10.194 13:22:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:23:10.194 13:22:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:10.194 13:22:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:23:10.451 13:22:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:10.451 13:22:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:23:10.451 13:22:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:10.451 13:22:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:10.451 rmmod nvme_tcp 00:23:10.451 rmmod nvme_fabrics 00:23:10.451 rmmod nvme_keyring 00:23:10.451 13:22:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:10.451 13:22:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:23:10.451 13:22:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:23:10.451 13:22:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 103623 ']' 00:23:10.451 13:22:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 103623 00:23:10.451 13:22:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 103623 ']' 00:23:10.451 13:22:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@950 -- # kill -0 103623 00:23:10.452 13:22:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname 00:23:10.452 13:22:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:10.452 13:22:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 103623 00:23:10.452 killing process with pid 103623 00:23:10.452 13:22:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:10.452 13:22:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:10.452 13:22:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 103623' 00:23:10.452 13:22:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 103623 00:23:10.452 13:22:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 
-- # wait 103623 00:23:10.709 13:22:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:10.709 13:22:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:10.709 13:22:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:10.709 13:22:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:10.709 13:22:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:10.709 13:22:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:10.709 13:22:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:10.709 13:22:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:10.709 13:22:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:10.709 00:23:10.709 real 0m5.048s 00:23:10.709 user 0m16.002s 00:23:10.709 sys 0m1.157s 00:23:10.709 13:22:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:10.709 13:22:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:10.709 ************************************ 00:23:10.709 END TEST nvmf_multicontroller 00:23:10.709 ************************************ 00:23:10.709 13:22:07 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:10.709 13:22:07 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:10.709 13:22:07 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:10.709 13:22:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:10.709 ************************************ 00:23:10.709 START TEST nvmf_aer 00:23:10.709 ************************************ 00:23:10.709 13:22:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:10.967 * Looking for test storage... 
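Stripped of the xtrace prefixes, the multicontroller run that just finished amounts to the flow sketched below. Every flag is copied from the trace, rpc_cmd is in effect scripts/rpc.py aimed at the right socket, and the Malloc1/cnode2 setup repeats the cnode1 lines, so treat this as a condensed recap rather than the test script itself.

  # target side: TCP transport plus a malloc-backed subsystem listening on two ports
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  # ... same pattern for Malloc1 / nqn.2016-06.io.spdk:cnode2 ...

  # host side: bdevperf waits for RPC configuration (-z) on its own socket
  build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &

  # the first attach creates NVMe0n1; the NOT-wrapped re-attach attempts above
  # (different hostnqn, cnode2, -x disable, -x failover on the same path) are all
  # expected to fail with the Code=-114 "already exists" errors seen in the trace
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000

  # attaching NVMe0 again through the 4421 listener adds an alternate path; the test
  # then detaches that path and attaches a separate controller, NVMe1, on 4421 before
  # checking that bdev_nvme_get_controllers reports exactly two controllers
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp \
      -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp \
      -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000

  # I/O is then driven over the same socket, producing the Latency table in try.txt
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests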
00:23:10.968 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:10.968 13:22:07 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:10.968 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:23:10.968 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:10.968 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:10.968 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:10.968 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:10.968 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:10.968 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:10.968 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:10.968 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:10.968 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:10.968 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:10.968 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:23:10.968 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=c8b8b44b-387e-43b9-a950-dc0d98528a02 00:23:10.968 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:10.968 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:10.968 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:10.968 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:10.968 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:10.968 13:22:07 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:10.968 13:22:07 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:10.968 13:22:07 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:10.968 13:22:07 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.968 13:22:07 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.968 13:22:07 nvmf_tcp.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.968 13:22:07 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:23:10.968 13:22:07 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.968 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:23:10.968 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:10.968 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:10.968 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:10.968 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:10.968 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:10.968 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:10.968 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:10.968 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:10.968 13:22:07 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:23:10.968 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:10.968 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:10.968 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:10.968 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:10.968 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:10.968 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:10.968 13:22:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:10.968 13:22:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:10.968 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:23:10.968 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:23:10.968 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:23:10.968 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:23:10.968 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:23:10.968 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@432 -- # nvmf_veth_init 00:23:10.968 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:10.968 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:10.968 
13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:10.968 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:10.968 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:10.968 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:10.968 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:10.968 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:10.968 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:10.968 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:10.968 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:10.968 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:10.968 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:10.968 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:10.968 Cannot find device "nvmf_tgt_br" 00:23:10.968 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@155 -- # true 00:23:10.968 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:10.968 Cannot find device "nvmf_tgt_br2" 00:23:10.968 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@156 -- # true 00:23:10.968 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:10.968 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:10.968 Cannot find device "nvmf_tgt_br" 00:23:10.968 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@158 -- # true 00:23:10.968 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:10.968 Cannot find device "nvmf_tgt_br2" 00:23:10.968 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@159 -- # true 00:23:10.968 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:10.968 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:10.968 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:10.968 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:10.968 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@162 -- # true 00:23:10.968 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:10.968 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:10.968 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@163 -- # true 00:23:10.968 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:10.968 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:10.968 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:10.968 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:10.968 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:10.968 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 
00:23:11.226 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:11.226 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:11.226 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:11.226 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:11.226 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:11.226 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:11.226 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:11.226 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:11.226 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:11.226 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:11.226 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:11.226 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:11.226 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:11.226 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:11.226 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:11.226 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:11.226 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:11.226 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:11.226 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:11.226 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.097 ms 00:23:11.226 00:23:11.226 --- 10.0.0.2 ping statistics --- 00:23:11.226 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:11.226 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:23:11.226 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:11.226 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:11.226 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:23:11.226 00:23:11.226 --- 10.0.0.3 ping statistics --- 00:23:11.226 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:11.226 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:23:11.226 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:11.226 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:11.226 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:23:11.226 00:23:11.226 --- 10.0.0.1 ping statistics --- 00:23:11.226 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:11.226 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:23:11.226 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:11.226 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@433 -- # return 0 00:23:11.226 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:11.226 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:11.226 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:11.226 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:11.226 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:11.226 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:11.226 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:11.226 13:22:07 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:23:11.226 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:11.226 13:22:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:11.226 13:22:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:11.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:11.226 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=103929 00:23:11.226 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:11.226 13:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 103929 00:23:11.226 13:22:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@827 -- # '[' -z 103929 ']' 00:23:11.226 13:22:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:11.226 13:22:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:11.226 13:22:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:11.226 13:22:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:11.226 13:22:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:11.226 [2024-07-15 13:22:07.948335] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:23:11.226 [2024-07-15 13:22:07.948441] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:11.482 [2024-07-15 13:22:08.088737] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:11.483 [2024-07-15 13:22:08.195647] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:11.483 [2024-07-15 13:22:08.195952] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
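The addressing, bridging and firewall steps traced just above finish the topology: the initiator keeps 10.0.0.1, the namespaced target answers on 10.0.0.2 and 10.0.0.3, and the root-namespace peers are enslaved to the nvmf_br bridge before connectivity is ping-verified. A condensed sketch of the same calls (the individual veth ends are also brought up, as in the trace):

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    # admit NVMe/TCP traffic on 4420 and let bridged frames hairpin
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2   # initiator -> first target port, as checked above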
00:23:11.483 [2024-07-15 13:22:08.196172] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:11.483 [2024-07-15 13:22:08.196337] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:11.483 [2024-07-15 13:22:08.196381] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:11.483 [2024-07-15 13:22:08.196611] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:11.483 [2024-07-15 13:22:08.196919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:11.483 [2024-07-15 13:22:08.197970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:11.483 [2024-07-15 13:22:08.198023] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:12.414 13:22:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:12.414 13:22:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@860 -- # return 0 00:23:12.414 13:22:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:12.414 13:22:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:12.414 13:22:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:12.414 13:22:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:12.414 13:22:08 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:12.414 13:22:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.414 13:22:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:12.414 [2024-07-15 13:22:09.008413] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:12.414 13:22:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.414 13:22:09 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:23:12.414 13:22:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.414 13:22:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:12.414 Malloc0 00:23:12.414 13:22:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.414 13:22:09 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:23:12.414 13:22:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.414 13:22:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:12.414 13:22:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.414 13:22:09 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:12.414 13:22:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.414 13:22:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:12.414 13:22:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.414 13:22:09 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:12.414 13:22:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.414 13:22:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:12.414 [2024-07-15 13:22:09.077091] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:23:12.414 13:22:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.414 13:22:09 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:23:12.414 13:22:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.414 13:22:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:12.414 [ 00:23:12.414 { 00:23:12.414 "allow_any_host": true, 00:23:12.414 "hosts": [], 00:23:12.414 "listen_addresses": [], 00:23:12.414 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:12.414 "subtype": "Discovery" 00:23:12.414 }, 00:23:12.414 { 00:23:12.414 "allow_any_host": true, 00:23:12.414 "hosts": [], 00:23:12.414 "listen_addresses": [ 00:23:12.414 { 00:23:12.414 "adrfam": "IPv4", 00:23:12.414 "traddr": "10.0.0.2", 00:23:12.414 "trsvcid": "4420", 00:23:12.414 "trtype": "TCP" 00:23:12.414 } 00:23:12.414 ], 00:23:12.414 "max_cntlid": 65519, 00:23:12.414 "max_namespaces": 2, 00:23:12.414 "min_cntlid": 1, 00:23:12.414 "model_number": "SPDK bdev Controller", 00:23:12.414 "namespaces": [ 00:23:12.414 { 00:23:12.414 "bdev_name": "Malloc0", 00:23:12.414 "name": "Malloc0", 00:23:12.414 "nguid": "7CC5179ABE5C4D0296FB8DEAD355CF0A", 00:23:12.414 "nsid": 1, 00:23:12.414 "uuid": "7cc5179a-be5c-4d02-96fb-8dead355cf0a" 00:23:12.414 } 00:23:12.414 ], 00:23:12.414 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:12.414 "serial_number": "SPDK00000000000001", 00:23:12.414 "subtype": "NVMe" 00:23:12.414 } 00:23:12.414 ] 00:23:12.414 13:22:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.414 13:22:09 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:23:12.414 13:22:09 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:23:12.414 13:22:09 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=103983 00:23:12.414 13:22:09 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:23:12.414 13:22:09 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:23:12.414 13:22:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1261 -- # local i=0 00:23:12.414 13:22:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:12.414 13:22:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 0 -lt 200 ']' 00:23:12.414 13:22:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=1 00:23:12.415 13:22:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:23:12.694 13:22:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:12.694 13:22:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 1 -lt 200 ']' 00:23:12.694 13:22:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=2 00:23:12.694 13:22:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:23:12.694 13:22:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:12.694 13:22:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:23:12.694 13:22:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # return 0 00:23:12.694 13:22:09 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:23:12.694 13:22:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.694 13:22:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:12.694 Malloc1 00:23:12.694 13:22:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.694 13:22:09 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:23:12.694 13:22:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.694 13:22:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:12.694 13:22:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.694 13:22:09 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:23:12.694 13:22:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.694 13:22:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:12.694 Asynchronous Event Request test 00:23:12.694 Attaching to 10.0.0.2 00:23:12.694 Attached to 10.0.0.2 00:23:12.694 Registering asynchronous event callbacks... 00:23:12.694 Starting namespace attribute notice tests for all controllers... 00:23:12.694 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:23:12.694 aer_cb - Changed Namespace 00:23:12.694 Cleaning up... 00:23:12.694 [ 00:23:12.694 { 00:23:12.694 "allow_any_host": true, 00:23:12.694 "hosts": [], 00:23:12.694 "listen_addresses": [], 00:23:12.694 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:12.694 "subtype": "Discovery" 00:23:12.694 }, 00:23:12.694 { 00:23:12.694 "allow_any_host": true, 00:23:12.694 "hosts": [], 00:23:12.694 "listen_addresses": [ 00:23:12.694 { 00:23:12.694 "adrfam": "IPv4", 00:23:12.694 "traddr": "10.0.0.2", 00:23:12.694 "trsvcid": "4420", 00:23:12.694 "trtype": "TCP" 00:23:12.694 } 00:23:12.694 ], 00:23:12.694 "max_cntlid": 65519, 00:23:12.694 "max_namespaces": 2, 00:23:12.694 "min_cntlid": 1, 00:23:12.694 "model_number": "SPDK bdev Controller", 00:23:12.694 "namespaces": [ 00:23:12.694 { 00:23:12.694 "bdev_name": "Malloc0", 00:23:12.694 "name": "Malloc0", 00:23:12.694 "nguid": "7CC5179ABE5C4D0296FB8DEAD355CF0A", 00:23:12.694 "nsid": 1, 00:23:12.694 "uuid": "7cc5179a-be5c-4d02-96fb-8dead355cf0a" 00:23:12.694 }, 00:23:12.694 { 00:23:12.694 "bdev_name": "Malloc1", 00:23:12.694 "name": "Malloc1", 00:23:12.694 "nguid": "C34DE23CFF934503B94A75035678B8CB", 00:23:12.694 "nsid": 2, 00:23:12.694 "uuid": "c34de23c-ff93-4503-b94a-75035678b8cb" 00:23:12.694 } 00:23:12.694 ], 00:23:12.694 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:12.694 "serial_number": "SPDK00000000000001", 00:23:12.695 "subtype": "NVMe" 00:23:12.695 } 00:23:12.695 ] 00:23:12.695 13:22:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.695 13:22:09 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 103983 00:23:12.695 13:22:09 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:23:12.695 13:22:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.695 13:22:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:12.953 13:22:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.953 13:22:09 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- 
# rpc_cmd bdev_malloc_delete Malloc1 00:23:12.953 13:22:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.953 13:22:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:12.953 13:22:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.953 13:22:09 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:12.953 13:22:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.953 13:22:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:12.953 13:22:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.953 13:22:09 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:23:12.953 13:22:09 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:23:12.953 13:22:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:12.953 13:22:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:23:12.953 13:22:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:12.953 13:22:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:23:12.953 13:22:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:12.953 13:22:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:12.953 rmmod nvme_tcp 00:23:12.953 rmmod nvme_fabrics 00:23:12.953 rmmod nvme_keyring 00:23:12.953 13:22:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:12.953 13:22:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:23:12.953 13:22:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:23:12.953 13:22:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 103929 ']' 00:23:12.953 13:22:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 103929 00:23:12.953 13:22:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@946 -- # '[' -z 103929 ']' 00:23:12.953 13:22:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@950 -- # kill -0 103929 00:23:12.953 13:22:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # uname 00:23:12.954 13:22:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:12.954 13:22:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 103929 00:23:12.954 13:22:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:12.954 13:22:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:12.954 killing process with pid 103929 00:23:12.954 13:22:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@964 -- # echo 'killing process with pid 103929' 00:23:12.954 13:22:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@965 -- # kill 103929 00:23:12.954 13:22:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@970 -- # wait 103929 00:23:13.218 13:22:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:13.218 13:22:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:13.218 13:22:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:13.218 13:22:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:13.218 13:22:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:13.218 13:22:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:13.218 13:22:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
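Stripped of the xtrace noise, the nvmf_aer body above is a short RPC sequence plus the aer example binary; a rough standalone equivalent using scripts/rpc.py (rpc_cmd in the trace is the harness wrapper, paths are the ones from this workspace):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 --name Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # the aer tool connects, registers AER callbacks and waits for a namespace notice
    /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -n 2 -t /tmp/aer_touch_file &
    # adding a second namespace is what fires the "Changed Namespace" AEN seen above
    $RPC bdev_malloc_create 64 4096 --name Malloc1
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2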
00:23:13.218 13:22:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:13.218 13:22:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:13.218 00:23:13.218 real 0m2.470s 00:23:13.218 user 0m6.794s 00:23:13.218 sys 0m0.686s 00:23:13.218 13:22:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:13.218 13:22:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:13.218 ************************************ 00:23:13.218 END TEST nvmf_aer 00:23:13.218 ************************************ 00:23:13.218 13:22:09 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:13.218 13:22:09 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:13.218 13:22:09 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:13.218 13:22:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:13.218 ************************************ 00:23:13.218 START TEST nvmf_async_init 00:23:13.218 ************************************ 00:23:13.218 13:22:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:13.492 * Looking for test storage... 00:23:13.492 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:13.492 13:22:10 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:13.492 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:23:13.492 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:13.492 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:13.492 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:13.492 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:13.492 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:13.492 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:13.492 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:13.492 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:13.492 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:13.492 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:13.492 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:23:13.492 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=c8b8b44b-387e-43b9-a950-dc0d98528a02 00:23:13.492 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:13.492 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:13.492 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:13.492 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:13.492 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:13.492 13:22:10 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:13.492 
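Each suite in this log is started the same way by run_test, so a single suite can usually be replayed outside Jenkins by calling its script directly with the flags shown in the trace, for example (root required, repo path as in this workspace):

    cd /home/vagrant/spdk_repo/spdk
    sudo ./test/nvmf/host/async_init.sh --transport=tcp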
13:22:10 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:13.492 13:22:10 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:13.492 13:22:10 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.492 13:22:10 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.492 13:22:10 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.492 13:22:10 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:23:13.492 13:22:10 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.492 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:23:13.492 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:13.492 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:13.492 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:13.492 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:13.492 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:13.492 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:13.492 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:13.492 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 
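A note on the very long PATH strings here and earlier: /etc/opt/spdk-pkgdep/paths/export.sh is sourced once per suite and each sourcing prepends the same toolchain directories again, so the duplication is cumulative prepending rather than corruption. Per the @2..@4 traces it amounts to roughly:

    PATH=/opt/golangci/1.54.2/bin:$PATH
    PATH=/opt/go/1.21.1/bin:$PATH
    PATH=/opt/protoc/21.7/bin:$PATH
    export PATH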
00:23:13.492 13:22:10 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:23:13.492 13:22:10 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:23:13.492 13:22:10 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:23:13.492 13:22:10 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:23:13.492 13:22:10 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:23:13.492 13:22:10 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:23:13.492 13:22:10 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=cadbcf7249c74f74a40b9f80b90d8c33 00:23:13.492 13:22:10 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:23:13.493 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:13.493 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:13.493 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:13.493 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:13.493 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:13.493 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:13.493 13:22:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:13.493 13:22:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:13.493 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:23:13.493 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:23:13.493 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:23:13.493 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:23:13.493 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:23:13.493 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@432 -- # nvmf_veth_init 00:23:13.493 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:13.493 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:13.493 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:13.493 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:13.493 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:13.493 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:13.493 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:13.493 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:13.493 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:13.493 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:13.493 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:13.493 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:13.493 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:13.493 13:22:10 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:13.493 Cannot find device "nvmf_tgt_br" 00:23:13.493 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@155 -- # true 00:23:13.493 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:13.493 Cannot find device "nvmf_tgt_br2" 00:23:13.493 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@156 -- # true 00:23:13.493 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:13.493 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:13.493 Cannot find device "nvmf_tgt_br" 00:23:13.493 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@158 -- # true 00:23:13.493 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:13.493 Cannot find device "nvmf_tgt_br2" 00:23:13.493 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@159 -- # true 00:23:13.493 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:13.493 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:13.493 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:13.493 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:13.493 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@162 -- # true 00:23:13.493 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:13.493 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:13.493 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@163 -- # true 00:23:13.493 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:13.493 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:13.493 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:13.493 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:13.493 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:13.493 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:13.751 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:13.751 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:13.751 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:13.751 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:13.751 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:13.751 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:13.751 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:13.751 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:13.751 13:22:10 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:13.751 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:13.751 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:13.751 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:13.751 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:13.751 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:13.751 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:13.751 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:13.751 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:13.751 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:13.751 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:13.751 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.131 ms 00:23:13.751 00:23:13.751 --- 10.0.0.2 ping statistics --- 00:23:13.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:13.751 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:23:13.751 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:13.751 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:13.751 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:23:13.751 00:23:13.751 --- 10.0.0.3 ping statistics --- 00:23:13.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:13.751 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:23:13.751 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:13.751 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:13.751 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:23:13.751 00:23:13.751 --- 10.0.0.1 ping statistics --- 00:23:13.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:13.751 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:23:13.751 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:13.751 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@433 -- # return 0 00:23:13.751 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:13.751 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:13.751 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:13.751 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:13.751 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:13.751 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:13.751 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:13.751 13:22:10 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:23:13.751 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:13.751 13:22:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:13.751 13:22:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:13.751 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=104161 00:23:13.751 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:13.751 13:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 104161 00:23:13.751 13:22:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@827 -- # '[' -z 104161 ']' 00:23:13.751 13:22:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:13.751 13:22:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:13.751 13:22:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:13.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:13.752 13:22:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:13.752 13:22:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:13.752 [2024-07-15 13:22:10.435887] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:23:13.752 [2024-07-15 13:22:10.435975] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:14.010 [2024-07-15 13:22:10.572853] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:14.010 [2024-07-15 13:22:10.670143] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:14.010 [2024-07-15 13:22:10.670195] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:14.010 [2024-07-15 13:22:10.670218] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:14.010 [2024-07-15 13:22:10.670228] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:14.010 [2024-07-15 13:22:10.670235] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:14.010 [2024-07-15 13:22:10.670261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:14.943 13:22:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:14.943 13:22:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@860 -- # return 0 00:23:14.943 13:22:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:14.943 13:22:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:14.943 13:22:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:14.943 13:22:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:14.943 13:22:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:23:14.943 13:22:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.943 13:22:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:14.943 [2024-07-15 13:22:11.447829] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:14.943 13:22:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.943 13:22:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:23:14.943 13:22:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.943 13:22:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:14.943 null0 00:23:14.943 13:22:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.943 13:22:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:23:14.943 13:22:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.943 13:22:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:14.943 13:22:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.943 13:22:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:23:14.943 13:22:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.943 13:22:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:14.943 13:22:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.943 13:22:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g cadbcf7249c74f74a40b9f80b90d8c33 00:23:14.943 13:22:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.943 13:22:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:14.943 13:22:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.943 13:22:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:14.943 
13:22:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.943 13:22:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:14.943 [2024-07-15 13:22:11.487965] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:14.943 13:22:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.943 13:22:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:23:14.943 13:22:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.943 13:22:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:15.201 nvme0n1 00:23:15.201 13:22:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.201 13:22:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:15.201 13:22:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.201 13:22:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:15.201 [ 00:23:15.201 { 00:23:15.201 "aliases": [ 00:23:15.201 "cadbcf72-49c7-4f74-a40b-9f80b90d8c33" 00:23:15.201 ], 00:23:15.201 "assigned_rate_limits": { 00:23:15.201 "r_mbytes_per_sec": 0, 00:23:15.201 "rw_ios_per_sec": 0, 00:23:15.201 "rw_mbytes_per_sec": 0, 00:23:15.201 "w_mbytes_per_sec": 0 00:23:15.201 }, 00:23:15.201 "block_size": 512, 00:23:15.201 "claimed": false, 00:23:15.201 "driver_specific": { 00:23:15.201 "mp_policy": "active_passive", 00:23:15.201 "nvme": [ 00:23:15.201 { 00:23:15.201 "ctrlr_data": { 00:23:15.201 "ana_reporting": false, 00:23:15.201 "cntlid": 1, 00:23:15.201 "firmware_revision": "24.05.1", 00:23:15.201 "model_number": "SPDK bdev Controller", 00:23:15.201 "multi_ctrlr": true, 00:23:15.201 "oacs": { 00:23:15.201 "firmware": 0, 00:23:15.201 "format": 0, 00:23:15.201 "ns_manage": 0, 00:23:15.201 "security": 0 00:23:15.201 }, 00:23:15.201 "serial_number": "00000000000000000000", 00:23:15.201 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:15.201 "vendor_id": "0x8086" 00:23:15.201 }, 00:23:15.201 "ns_data": { 00:23:15.201 "can_share": true, 00:23:15.201 "id": 1 00:23:15.201 }, 00:23:15.201 "trid": { 00:23:15.201 "adrfam": "IPv4", 00:23:15.201 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:15.201 "traddr": "10.0.0.2", 00:23:15.201 "trsvcid": "4420", 00:23:15.201 "trtype": "TCP" 00:23:15.201 }, 00:23:15.201 "vs": { 00:23:15.201 "nvme_version": "1.3" 00:23:15.201 } 00:23:15.201 } 00:23:15.201 ] 00:23:15.201 }, 00:23:15.201 "memory_domains": [ 00:23:15.201 { 00:23:15.201 "dma_device_id": "system", 00:23:15.201 "dma_device_type": 1 00:23:15.201 } 00:23:15.201 ], 00:23:15.201 "name": "nvme0n1", 00:23:15.201 "num_blocks": 2097152, 00:23:15.201 "product_name": "NVMe disk", 00:23:15.201 "supported_io_types": { 00:23:15.201 "abort": true, 00:23:15.201 "compare": true, 00:23:15.201 "compare_and_write": true, 00:23:15.201 "flush": true, 00:23:15.201 "nvme_admin": true, 00:23:15.201 "nvme_io": true, 00:23:15.201 "read": true, 00:23:15.201 "reset": true, 00:23:15.201 "unmap": false, 00:23:15.201 "write": true, 00:23:15.201 "write_zeroes": true 00:23:15.201 }, 00:23:15.201 "uuid": "cadbcf72-49c7-4f74-a40b-9f80b90d8c33", 00:23:15.201 "zoned": false 00:23:15.201 } 00:23:15.201 ] 00:23:15.201 13:22:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.201 
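The async_init wiring traced above, condensed into the equivalent scripts/rpc.py calls (NQN and nguid as generated in this run); the TLS variant further down repeats the same attach against port 4421 with a PSK:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # target side: a 1024-block by 512 B null bdev exported with an explicit NGUID
    $RPC bdev_null_create null0 1024 512
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g cadbcf7249c74f74a40b9f80b90d8c33
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # initiator side: attach through bdev_nvme, then confirm the uuid matches the nguid
    $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode0
    $RPC bdev_get_bdevs -b nvme0n1   # expect "uuid": "cadbcf72-49c7-4f74-a40b-9f80b90d8c33"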
13:22:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:23:15.201 13:22:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.201 13:22:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:15.201 [2024-07-15 13:22:11.748721] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:15.201 [2024-07-15 13:22:11.748852] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2580340 (9): Bad file descriptor 00:23:15.201 [2024-07-15 13:22:11.891421] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:23:15.201 13:22:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.201 13:22:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:15.201 13:22:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.201 13:22:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:15.201 [ 00:23:15.201 { 00:23:15.201 "aliases": [ 00:23:15.201 "cadbcf72-49c7-4f74-a40b-9f80b90d8c33" 00:23:15.201 ], 00:23:15.201 "assigned_rate_limits": { 00:23:15.201 "r_mbytes_per_sec": 0, 00:23:15.201 "rw_ios_per_sec": 0, 00:23:15.201 "rw_mbytes_per_sec": 0, 00:23:15.201 "w_mbytes_per_sec": 0 00:23:15.201 }, 00:23:15.201 "block_size": 512, 00:23:15.201 "claimed": false, 00:23:15.201 "driver_specific": { 00:23:15.202 "mp_policy": "active_passive", 00:23:15.202 "nvme": [ 00:23:15.202 { 00:23:15.202 "ctrlr_data": { 00:23:15.202 "ana_reporting": false, 00:23:15.202 "cntlid": 2, 00:23:15.202 "firmware_revision": "24.05.1", 00:23:15.202 "model_number": "SPDK bdev Controller", 00:23:15.202 "multi_ctrlr": true, 00:23:15.202 "oacs": { 00:23:15.202 "firmware": 0, 00:23:15.202 "format": 0, 00:23:15.202 "ns_manage": 0, 00:23:15.202 "security": 0 00:23:15.202 }, 00:23:15.202 "serial_number": "00000000000000000000", 00:23:15.202 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:15.202 "vendor_id": "0x8086" 00:23:15.202 }, 00:23:15.202 "ns_data": { 00:23:15.202 "can_share": true, 00:23:15.202 "id": 1 00:23:15.202 }, 00:23:15.202 "trid": { 00:23:15.202 "adrfam": "IPv4", 00:23:15.202 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:15.202 "traddr": "10.0.0.2", 00:23:15.202 "trsvcid": "4420", 00:23:15.202 "trtype": "TCP" 00:23:15.202 }, 00:23:15.202 "vs": { 00:23:15.202 "nvme_version": "1.3" 00:23:15.202 } 00:23:15.202 } 00:23:15.202 ] 00:23:15.202 }, 00:23:15.202 "memory_domains": [ 00:23:15.202 { 00:23:15.202 "dma_device_id": "system", 00:23:15.202 "dma_device_type": 1 00:23:15.202 } 00:23:15.202 ], 00:23:15.202 "name": "nvme0n1", 00:23:15.202 "num_blocks": 2097152, 00:23:15.202 "product_name": "NVMe disk", 00:23:15.202 "supported_io_types": { 00:23:15.202 "abort": true, 00:23:15.202 "compare": true, 00:23:15.202 "compare_and_write": true, 00:23:15.202 "flush": true, 00:23:15.202 "nvme_admin": true, 00:23:15.202 "nvme_io": true, 00:23:15.202 "read": true, 00:23:15.202 "reset": true, 00:23:15.202 "unmap": false, 00:23:15.202 "write": true, 00:23:15.202 "write_zeroes": true 00:23:15.202 }, 00:23:15.202 "uuid": "cadbcf72-49c7-4f74-a40b-9f80b90d8c33", 00:23:15.202 "zoned": false 00:23:15.202 } 00:23:15.202 ] 00:23:15.202 13:22:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.202 13:22:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:23:15.202 13:22:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.202 13:22:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:15.202 13:22:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.202 13:22:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:23:15.202 13:22:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.UXbIbzTe5A 00:23:15.202 13:22:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:15.202 13:22:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.UXbIbzTe5A 00:23:15.202 13:22:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:23:15.202 13:22:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.202 13:22:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:15.461 13:22:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.461 13:22:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:23:15.461 13:22:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.461 13:22:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:15.461 [2024-07-15 13:22:11.948927] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:15.461 [2024-07-15 13:22:11.949115] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:15.461 13:22:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.461 13:22:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.UXbIbzTe5A 00:23:15.461 13:22:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.461 13:22:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:15.461 [2024-07-15 13:22:11.956902] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:15.461 13:22:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.461 13:22:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.UXbIbzTe5A 00:23:15.461 13:22:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.461 13:22:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:15.461 [2024-07-15 13:22:11.964890] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:15.461 [2024-07-15 13:22:11.964966] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:15.461 nvme0n1 00:23:15.461 13:22:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.461 13:22:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd 
bdev_get_bdevs -b nvme0n1 00:23:15.461 13:22:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.461 13:22:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:15.461 [ 00:23:15.461 { 00:23:15.461 "aliases": [ 00:23:15.461 "cadbcf72-49c7-4f74-a40b-9f80b90d8c33" 00:23:15.461 ], 00:23:15.461 "assigned_rate_limits": { 00:23:15.461 "r_mbytes_per_sec": 0, 00:23:15.461 "rw_ios_per_sec": 0, 00:23:15.461 "rw_mbytes_per_sec": 0, 00:23:15.461 "w_mbytes_per_sec": 0 00:23:15.461 }, 00:23:15.461 "block_size": 512, 00:23:15.461 "claimed": false, 00:23:15.461 "driver_specific": { 00:23:15.461 "mp_policy": "active_passive", 00:23:15.461 "nvme": [ 00:23:15.461 { 00:23:15.461 "ctrlr_data": { 00:23:15.461 "ana_reporting": false, 00:23:15.461 "cntlid": 3, 00:23:15.461 "firmware_revision": "24.05.1", 00:23:15.461 "model_number": "SPDK bdev Controller", 00:23:15.461 "multi_ctrlr": true, 00:23:15.461 "oacs": { 00:23:15.461 "firmware": 0, 00:23:15.461 "format": 0, 00:23:15.461 "ns_manage": 0, 00:23:15.461 "security": 0 00:23:15.461 }, 00:23:15.461 "serial_number": "00000000000000000000", 00:23:15.461 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:15.461 "vendor_id": "0x8086" 00:23:15.461 }, 00:23:15.461 "ns_data": { 00:23:15.461 "can_share": true, 00:23:15.461 "id": 1 00:23:15.461 }, 00:23:15.461 "trid": { 00:23:15.461 "adrfam": "IPv4", 00:23:15.461 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:15.461 "traddr": "10.0.0.2", 00:23:15.461 "trsvcid": "4421", 00:23:15.461 "trtype": "TCP" 00:23:15.461 }, 00:23:15.461 "vs": { 00:23:15.461 "nvme_version": "1.3" 00:23:15.461 } 00:23:15.461 } 00:23:15.461 ] 00:23:15.461 }, 00:23:15.461 "memory_domains": [ 00:23:15.461 { 00:23:15.461 "dma_device_id": "system", 00:23:15.461 "dma_device_type": 1 00:23:15.461 } 00:23:15.461 ], 00:23:15.461 "name": "nvme0n1", 00:23:15.461 "num_blocks": 2097152, 00:23:15.461 "product_name": "NVMe disk", 00:23:15.461 "supported_io_types": { 00:23:15.461 "abort": true, 00:23:15.461 "compare": true, 00:23:15.461 "compare_and_write": true, 00:23:15.461 "flush": true, 00:23:15.461 "nvme_admin": true, 00:23:15.461 "nvme_io": true, 00:23:15.461 "read": true, 00:23:15.461 "reset": true, 00:23:15.461 "unmap": false, 00:23:15.461 "write": true, 00:23:15.461 "write_zeroes": true 00:23:15.461 }, 00:23:15.461 "uuid": "cadbcf72-49c7-4f74-a40b-9f80b90d8c33", 00:23:15.461 "zoned": false 00:23:15.461 } 00:23:15.461 ] 00:23:15.461 13:22:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.461 13:22:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:15.461 13:22:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.461 13:22:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:15.461 13:22:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.461 13:22:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.UXbIbzTe5A 00:23:15.461 13:22:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:23:15.461 13:22:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:23:15.461 13:22:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:15.461 13:22:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:23:15.461 13:22:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:15.461 13:22:12 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:23:15.461 13:22:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:15.461 13:22:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:15.461 rmmod nvme_tcp 00:23:15.461 rmmod nvme_fabrics 00:23:15.461 rmmod nvme_keyring 00:23:15.720 13:22:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:15.720 13:22:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:23:15.720 13:22:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:23:15.720 13:22:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 104161 ']' 00:23:15.720 13:22:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 104161 00:23:15.720 13:22:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@946 -- # '[' -z 104161 ']' 00:23:15.720 13:22:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@950 -- # kill -0 104161 00:23:15.720 13:22:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # uname 00:23:15.720 13:22:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:15.720 13:22:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 104161 00:23:15.720 13:22:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:15.720 13:22:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:15.720 killing process with pid 104161 00:23:15.720 13:22:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 104161' 00:23:15.720 13:22:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@965 -- # kill 104161 00:23:15.720 [2024-07-15 13:22:12.230272] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:15.720 [2024-07-15 13:22:12.230312] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:15.720 13:22:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@970 -- # wait 104161 00:23:15.720 13:22:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:15.720 13:22:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:15.720 13:22:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:15.720 13:22:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:15.720 13:22:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:15.720 13:22:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:15.720 13:22:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:15.720 13:22:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:15.978 13:22:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:15.978 00:23:15.978 real 0m2.526s 00:23:15.978 user 0m2.394s 00:23:15.978 sys 0m0.570s 00:23:15.978 13:22:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:15.978 13:22:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:15.978 ************************************ 00:23:15.978 END TEST nvmf_async_init 
00:23:15.978 ************************************ 00:23:15.978 13:22:12 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:15.978 13:22:12 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:15.978 13:22:12 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:15.978 13:22:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:15.978 ************************************ 00:23:15.978 START TEST dma 00:23:15.978 ************************************ 00:23:15.979 13:22:12 nvmf_tcp.dma -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:15.979 * Looking for test storage... 00:23:15.979 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:15.979 13:22:12 nvmf_tcp.dma -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:15.979 13:22:12 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:23:15.979 13:22:12 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:15.979 13:22:12 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:15.979 13:22:12 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:15.979 13:22:12 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:15.979 13:22:12 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:15.979 13:22:12 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:15.979 13:22:12 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:15.979 13:22:12 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:15.979 13:22:12 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:15.979 13:22:12 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:15.979 13:22:12 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:23:15.979 13:22:12 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=c8b8b44b-387e-43b9-a950-dc0d98528a02 00:23:15.979 13:22:12 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:15.979 13:22:12 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:15.979 13:22:12 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:15.979 13:22:12 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:15.979 13:22:12 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:15.979 13:22:12 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:15.979 13:22:12 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:15.979 13:22:12 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:15.979 13:22:12 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.979 13:22:12 nvmf_tcp.dma -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.979 13:22:12 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.979 13:22:12 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:23:15.979 13:22:12 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.979 13:22:12 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:23:15.979 13:22:12 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:15.979 13:22:12 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:15.979 13:22:12 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:15.979 13:22:12 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:15.979 13:22:12 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:15.979 13:22:12 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:15.979 13:22:12 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:15.979 13:22:12 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:15.979 13:22:12 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:23:15.979 13:22:12 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:23:15.979 00:23:15.979 real 0m0.109s 00:23:15.979 user 0m0.053s 00:23:15.979 sys 0m0.061s 00:23:15.979 13:22:12 nvmf_tcp.dma -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:15.979 ************************************ 00:23:15.979 END TEST dma 00:23:15.979 13:22:12 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:23:15.979 ************************************ 00:23:15.979 13:22:12 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:15.979 13:22:12 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:15.979 13:22:12 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:15.979 13:22:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:15.979 ************************************ 00:23:15.979 START TEST nvmf_identify 
00:23:15.979 ************************************ 00:23:15.979 13:22:12 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:16.238 * Looking for test storage... 00:23:16.238 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:16.238 13:22:12 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:16.238 13:22:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:23:16.238 13:22:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:16.238 13:22:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:16.238 13:22:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:16.238 13:22:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:16.238 13:22:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:16.238 13:22:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:16.238 13:22:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:16.238 13:22:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:16.238 13:22:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:16.238 13:22:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:16.238 13:22:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:23:16.238 13:22:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=c8b8b44b-387e-43b9-a950-dc0d98528a02 00:23:16.238 13:22:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:16.238 13:22:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:16.238 13:22:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:16.238 13:22:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:16.238 13:22:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:16.238 13:22:12 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:16.238 13:22:12 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:16.238 13:22:12 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:16.238 13:22:12 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.238 13:22:12 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.238 13:22:12 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.238 13:22:12 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:23:16.238 13:22:12 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.238 13:22:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:23:16.238 13:22:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:16.238 13:22:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:16.238 13:22:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:16.238 13:22:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:16.238 13:22:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:16.238 13:22:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:16.238 13:22:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:16.238 13:22:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:16.238 13:22:12 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:16.238 13:22:12 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:16.238 13:22:12 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:23:16.238 13:22:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:16.238 13:22:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:16.238 13:22:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:16.238 13:22:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:16.238 13:22:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:16.238 13:22:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:16.238 13:22:12 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:16.238 13:22:12 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:16.238 13:22:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:23:16.238 13:22:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:23:16.238 13:22:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:23:16.238 13:22:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:23:16.238 13:22:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:23:16.238 13:22:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@432 -- # nvmf_veth_init 00:23:16.238 13:22:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:16.238 13:22:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:16.238 13:22:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:16.239 13:22:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:16.239 13:22:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:16.239 13:22:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:16.239 13:22:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:16.239 13:22:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:16.239 13:22:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:16.239 13:22:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:16.239 13:22:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:16.239 13:22:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:16.239 13:22:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:16.239 13:22:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:16.239 Cannot find device "nvmf_tgt_br" 00:23:16.239 13:22:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # true 00:23:16.239 13:22:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:16.239 Cannot find device "nvmf_tgt_br2" 00:23:16.239 13:22:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # true 00:23:16.239 13:22:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:16.239 13:22:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:16.239 Cannot find device "nvmf_tgt_br" 00:23:16.239 13:22:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # true 00:23:16.239 13:22:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:16.239 Cannot find device "nvmf_tgt_br2" 00:23:16.239 13:22:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # true 00:23:16.239 13:22:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:16.239 13:22:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:16.239 13:22:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:16.239 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:16.239 13:22:12 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # true 00:23:16.239 13:22:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:16.239 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:16.239 13:22:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # true 00:23:16.239 13:22:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:16.239 13:22:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:16.239 13:22:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:16.239 13:22:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:16.239 13:22:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:16.239 13:22:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:16.498 13:22:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:16.498 13:22:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:16.498 13:22:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:16.498 13:22:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:16.498 13:22:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:16.498 13:22:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:16.498 13:22:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:16.498 13:22:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:16.498 13:22:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:16.498 13:22:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:16.498 13:22:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:16.498 13:22:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:16.498 13:22:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:16.498 13:22:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:16.498 13:22:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:16.498 13:22:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:16.498 13:22:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:16.498 13:22:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:16.498 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:16.498 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:23:16.498 00:23:16.498 --- 10.0.0.2 ping statistics --- 00:23:16.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:16.498 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:23:16.498 13:22:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:16.498 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:16.498 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:23:16.498 00:23:16.498 --- 10.0.0.3 ping statistics --- 00:23:16.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:16.498 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:23:16.498 13:22:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:16.498 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:16.498 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:23:16.498 00:23:16.498 --- 10.0.0.1 ping statistics --- 00:23:16.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:16.498 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:23:16.498 13:22:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:16.498 13:22:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@433 -- # return 0 00:23:16.498 13:22:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:16.498 13:22:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:16.498 13:22:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:16.498 13:22:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:16.498 13:22:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:16.498 13:22:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:16.498 13:22:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:16.498 13:22:13 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:23:16.498 13:22:13 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:16.498 13:22:13 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:16.498 13:22:13 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=104420 00:23:16.498 13:22:13 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:16.498 13:22:13 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:16.498 13:22:13 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 104420 00:23:16.498 13:22:13 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@827 -- # '[' -z 104420 ']' 00:23:16.498 13:22:13 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:16.498 13:22:13 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:16.498 13:22:13 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:16.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
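The nvmf_veth_init topology exercised by the pings above can be recreated by hand; a minimal sketch covering only the first target interface, with interface names, addresses, and firewall rules taken from the trace (running as root is assumed):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge; ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2   # initiator side reaching the target namespace, as in the trace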
00:23:16.498 13:22:13 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:16.498 13:22:13 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:16.498 [2024-07-15 13:22:13.228432] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:23:16.498 [2024-07-15 13:22:13.228524] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:16.756 [2024-07-15 13:22:13.367186] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:16.756 [2024-07-15 13:22:13.473795] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:16.756 [2024-07-15 13:22:13.473851] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:16.756 [2024-07-15 13:22:13.473865] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:16.756 [2024-07-15 13:22:13.473875] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:16.756 [2024-07-15 13:22:13.473885] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:16.756 [2024-07-15 13:22:13.474017] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:16.756 [2024-07-15 13:22:13.474164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:16.756 [2024-07-15 13:22:13.474849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:16.756 [2024-07-15 13:22:13.474905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:17.689 13:22:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:17.689 13:22:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@860 -- # return 0 00:23:17.690 13:22:14 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:17.690 13:22:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.690 13:22:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:17.690 [2024-07-15 13:22:14.262612] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:17.690 13:22:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.690 13:22:14 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:23:17.690 13:22:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:17.690 13:22:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:17.690 13:22:14 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:17.690 13:22:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.690 13:22:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:17.690 Malloc0 00:23:17.690 13:22:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.690 13:22:14 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:17.690 13:22:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.690 13:22:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:17.690 13:22:14 
nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.690 13:22:14 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:23:17.690 13:22:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.690 13:22:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:17.690 13:22:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.690 13:22:14 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:17.690 13:22:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.690 13:22:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:17.690 [2024-07-15 13:22:14.365693] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:17.690 13:22:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.690 13:22:14 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:17.690 13:22:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.690 13:22:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:17.690 13:22:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.690 13:22:14 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:23:17.690 13:22:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.690 13:22:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:17.690 [ 00:23:17.690 { 00:23:17.690 "allow_any_host": true, 00:23:17.690 "hosts": [], 00:23:17.690 "listen_addresses": [ 00:23:17.690 { 00:23:17.690 "adrfam": "IPv4", 00:23:17.690 "traddr": "10.0.0.2", 00:23:17.690 "trsvcid": "4420", 00:23:17.690 "trtype": "TCP" 00:23:17.690 } 00:23:17.690 ], 00:23:17.690 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:17.690 "subtype": "Discovery" 00:23:17.690 }, 00:23:17.690 { 00:23:17.690 "allow_any_host": true, 00:23:17.690 "hosts": [], 00:23:17.690 "listen_addresses": [ 00:23:17.690 { 00:23:17.690 "adrfam": "IPv4", 00:23:17.690 "traddr": "10.0.0.2", 00:23:17.690 "trsvcid": "4420", 00:23:17.690 "trtype": "TCP" 00:23:17.690 } 00:23:17.690 ], 00:23:17.690 "max_cntlid": 65519, 00:23:17.690 "max_namespaces": 32, 00:23:17.690 "min_cntlid": 1, 00:23:17.690 "model_number": "SPDK bdev Controller", 00:23:17.690 "namespaces": [ 00:23:17.690 { 00:23:17.690 "bdev_name": "Malloc0", 00:23:17.690 "eui64": "ABCDEF0123456789", 00:23:17.690 "name": "Malloc0", 00:23:17.690 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:23:17.690 "nsid": 1, 00:23:17.690 "uuid": "0a3668a2-9e5a-4ed1-9abf-b28d5b90cbd2" 00:23:17.690 } 00:23:17.690 ], 00:23:17.690 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:17.690 "serial_number": "SPDK00000000000001", 00:23:17.690 "subtype": "NVMe" 00:23:17.690 } 00:23:17.690 ] 00:23:17.690 13:22:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.690 13:22:14 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:23:17.950 
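The target configuration captured above can be reproduced outside the autotest harness; a minimal sketch using scripts/rpc.py against the default /var/tmp/spdk.sock socket (treating rpc_cmd as a wrapper around scripts/rpc.py is an assumption; the subcommands, arguments, and binary paths are taken from the trace):

  # start the target inside the test namespace, then configure it over RPC
  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # query the discovery subsystem the same way the test does
  ./build/bin/spdk_nvme_identify \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all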
[2024-07-15 13:22:14.435675] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:23:17.950 [2024-07-15 13:22:14.435940] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104479 ] 00:23:17.950 [2024-07-15 13:22:14.576497] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:23:17.950 [2024-07-15 13:22:14.576584] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:17.950 [2024-07-15 13:22:14.576592] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:17.950 [2024-07-15 13:22:14.576613] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:17.950 [2024-07-15 13:22:14.576625] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:17.950 [2024-07-15 13:22:14.576803] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:23:17.950 [2024-07-15 13:22:14.576855] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x10b6970 0 00:23:17.950 [2024-07-15 13:22:14.589234] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:17.950 [2024-07-15 13:22:14.589269] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:17.950 [2024-07-15 13:22:14.589276] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:17.950 [2024-07-15 13:22:14.589280] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:17.950 [2024-07-15 13:22:14.589333] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:17.950 [2024-07-15 13:22:14.589341] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:17.950 [2024-07-15 13:22:14.589346] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10b6970) 00:23:17.950 [2024-07-15 13:22:14.589364] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:17.950 [2024-07-15 13:22:14.589403] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ef1d0, cid 0, qid 0 00:23:17.950 [2024-07-15 13:22:14.597231] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:17.950 [2024-07-15 13:22:14.597259] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:17.950 [2024-07-15 13:22:14.597265] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:17.950 [2024-07-15 13:22:14.597271] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10ef1d0) on tqpair=0x10b6970 00:23:17.950 [2024-07-15 13:22:14.597285] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:17.950 [2024-07-15 13:22:14.597296] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:23:17.950 [2024-07-15 13:22:14.597303] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:23:17.950 [2024-07-15 13:22:14.597324] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:17.950 [2024-07-15 
13:22:14.597330] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:17.950 [2024-07-15 13:22:14.597335] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10b6970) 00:23:17.950 [2024-07-15 13:22:14.597348] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.950 [2024-07-15 13:22:14.597381] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ef1d0, cid 0, qid 0 00:23:17.950 [2024-07-15 13:22:14.597455] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:17.950 [2024-07-15 13:22:14.597463] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:17.950 [2024-07-15 13:22:14.597467] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:17.950 [2024-07-15 13:22:14.597471] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10ef1d0) on tqpair=0x10b6970 00:23:17.950 [2024-07-15 13:22:14.597478] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:23:17.950 [2024-07-15 13:22:14.597486] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:23:17.950 [2024-07-15 13:22:14.597495] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:17.950 [2024-07-15 13:22:14.597499] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:17.950 [2024-07-15 13:22:14.597503] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10b6970) 00:23:17.950 [2024-07-15 13:22:14.597511] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.950 [2024-07-15 13:22:14.597532] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ef1d0, cid 0, qid 0 00:23:17.950 [2024-07-15 13:22:14.597588] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:17.950 [2024-07-15 13:22:14.597595] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:17.950 [2024-07-15 13:22:14.597599] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:17.950 [2024-07-15 13:22:14.597603] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10ef1d0) on tqpair=0x10b6970 00:23:17.950 [2024-07-15 13:22:14.597610] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:23:17.950 [2024-07-15 13:22:14.597619] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:23:17.950 [2024-07-15 13:22:14.597627] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:17.950 [2024-07-15 13:22:14.597632] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:17.950 [2024-07-15 13:22:14.597636] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10b6970) 00:23:17.950 [2024-07-15 13:22:14.597643] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.950 [2024-07-15 13:22:14.597663] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ef1d0, cid 0, qid 0 00:23:17.950 [2024-07-15 13:22:14.597718] 
nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:17.950 [2024-07-15 13:22:14.597725] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:17.950 [2024-07-15 13:22:14.597729] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:17.950 [2024-07-15 13:22:14.597734] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10ef1d0) on tqpair=0x10b6970 00:23:17.950 [2024-07-15 13:22:14.597741] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:17.950 [2024-07-15 13:22:14.597752] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:17.950 [2024-07-15 13:22:14.597757] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:17.950 [2024-07-15 13:22:14.597761] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10b6970) 00:23:17.950 [2024-07-15 13:22:14.597768] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.950 [2024-07-15 13:22:14.597787] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ef1d0, cid 0, qid 0 00:23:17.950 [2024-07-15 13:22:14.597843] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:17.950 [2024-07-15 13:22:14.597850] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:17.950 [2024-07-15 13:22:14.597854] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:17.950 [2024-07-15 13:22:14.597858] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10ef1d0) on tqpair=0x10b6970 00:23:17.950 [2024-07-15 13:22:14.597864] nvme_ctrlr.c:3751:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:23:17.950 [2024-07-15 13:22:14.597870] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:23:17.950 [2024-07-15 13:22:14.597878] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:17.950 [2024-07-15 13:22:14.597984] nvme_ctrlr.c:3944:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:23:17.951 [2024-07-15 13:22:14.597999] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:17.951 [2024-07-15 13:22:14.598010] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:17.951 [2024-07-15 13:22:14.598015] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:17.951 [2024-07-15 13:22:14.598019] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10b6970) 00:23:17.951 [2024-07-15 13:22:14.598026] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.951 [2024-07-15 13:22:14.598047] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ef1d0, cid 0, qid 0 00:23:17.951 [2024-07-15 13:22:14.598109] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:17.951 [2024-07-15 13:22:14.598129] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:23:17.951 [2024-07-15 13:22:14.598134] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:17.951 [2024-07-15 13:22:14.598138] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10ef1d0) on tqpair=0x10b6970 00:23:17.951 [2024-07-15 13:22:14.598145] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:17.951 [2024-07-15 13:22:14.598156] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:17.951 [2024-07-15 13:22:14.598161] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:17.951 [2024-07-15 13:22:14.598165] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10b6970) 00:23:17.951 [2024-07-15 13:22:14.598173] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.951 [2024-07-15 13:22:14.598192] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ef1d0, cid 0, qid 0 00:23:17.951 [2024-07-15 13:22:14.598260] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:17.951 [2024-07-15 13:22:14.598269] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:17.951 [2024-07-15 13:22:14.598273] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:17.951 [2024-07-15 13:22:14.598278] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10ef1d0) on tqpair=0x10b6970 00:23:17.951 [2024-07-15 13:22:14.598284] nvme_ctrlr.c:3786:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:17.951 [2024-07-15 13:22:14.598289] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:23:17.951 [2024-07-15 13:22:14.598298] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:23:17.951 [2024-07-15 13:22:14.598309] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:23:17.951 [2024-07-15 13:22:14.598320] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:17.951 [2024-07-15 13:22:14.598325] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10b6970) 00:23:17.951 [2024-07-15 13:22:14.598333] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.951 [2024-07-15 13:22:14.598356] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ef1d0, cid 0, qid 0 00:23:17.951 [2024-07-15 13:22:14.598470] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:17.951 [2024-07-15 13:22:14.598482] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:17.951 [2024-07-15 13:22:14.598486] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:17.951 [2024-07-15 13:22:14.598490] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10b6970): datao=0, datal=4096, cccid=0 00:23:17.951 [2024-07-15 13:22:14.598495] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10ef1d0) on tqpair(0x10b6970): expected_datao=0, 
payload_size=4096 00:23:17.951 [2024-07-15 13:22:14.598501] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:17.951 [2024-07-15 13:22:14.598511] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:17.951 [2024-07-15 13:22:14.598516] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:17.951 [2024-07-15 13:22:14.598524] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:17.951 [2024-07-15 13:22:14.598531] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:17.951 [2024-07-15 13:22:14.598535] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:17.951 [2024-07-15 13:22:14.598539] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10ef1d0) on tqpair=0x10b6970 00:23:17.951 [2024-07-15 13:22:14.598550] nvme_ctrlr.c:1986:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:23:17.951 [2024-07-15 13:22:14.598556] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:23:17.951 [2024-07-15 13:22:14.598561] nvme_ctrlr.c:1993:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:23:17.951 [2024-07-15 13:22:14.598566] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:23:17.951 [2024-07-15 13:22:14.598572] nvme_ctrlr.c:2032:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:23:17.951 [2024-07-15 13:22:14.598577] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:23:17.951 [2024-07-15 13:22:14.598591] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:23:17.951 [2024-07-15 13:22:14.598601] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:17.951 [2024-07-15 13:22:14.598606] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:17.951 [2024-07-15 13:22:14.598610] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10b6970) 00:23:17.951 [2024-07-15 13:22:14.598618] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:17.951 [2024-07-15 13:22:14.598640] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ef1d0, cid 0, qid 0 00:23:17.951 [2024-07-15 13:22:14.598720] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:17.951 [2024-07-15 13:22:14.598729] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:17.951 [2024-07-15 13:22:14.598732] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:17.951 [2024-07-15 13:22:14.598737] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10ef1d0) on tqpair=0x10b6970 00:23:17.951 [2024-07-15 13:22:14.598747] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:17.951 [2024-07-15 13:22:14.598751] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:17.951 [2024-07-15 13:22:14.598755] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10b6970) 00:23:17.951 [2024-07-15 13:22:14.598762] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:17.951 [2024-07-15 13:22:14.598769] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:17.951 [2024-07-15 13:22:14.598773] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:17.951 [2024-07-15 13:22:14.598777] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x10b6970) 00:23:17.951 [2024-07-15 13:22:14.598783] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:17.951 [2024-07-15 13:22:14.598790] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:17.951 [2024-07-15 13:22:14.598794] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:17.951 [2024-07-15 13:22:14.598798] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x10b6970) 00:23:17.951 [2024-07-15 13:22:14.598804] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:17.951 [2024-07-15 13:22:14.598811] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:17.951 [2024-07-15 13:22:14.598815] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:17.951 [2024-07-15 13:22:14.598818] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10b6970) 00:23:17.951 [2024-07-15 13:22:14.598825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:17.951 [2024-07-15 13:22:14.598830] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:23:17.951 [2024-07-15 13:22:14.598839] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:17.951 [2024-07-15 13:22:14.598847] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:17.951 [2024-07-15 13:22:14.598851] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10b6970) 00:23:17.951 [2024-07-15 13:22:14.598858] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.951 [2024-07-15 13:22:14.598886] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ef1d0, cid 0, qid 0 00:23:17.951 [2024-07-15 13:22:14.598894] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ef330, cid 1, qid 0 00:23:17.951 [2024-07-15 13:22:14.598899] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ef490, cid 2, qid 0 00:23:17.951 [2024-07-15 13:22:14.598904] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ef5f0, cid 3, qid 0 00:23:17.951 [2024-07-15 13:22:14.598909] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ef750, cid 4, qid 0 00:23:17.951 [2024-07-15 13:22:14.599003] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:17.951 [2024-07-15 13:22:14.599010] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:17.951 [2024-07-15 13:22:14.599013] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:17.951 [2024-07-15 13:22:14.599018] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: 
*DEBUG*: complete tcp_req(0x10ef750) on tqpair=0x10b6970 00:23:17.951 [2024-07-15 13:22:14.599025] nvme_ctrlr.c:2904:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:23:17.951 [2024-07-15 13:22:14.599031] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:23:17.951 [2024-07-15 13:22:14.599043] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:17.951 [2024-07-15 13:22:14.599047] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10b6970) 00:23:17.951 [2024-07-15 13:22:14.599055] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.951 [2024-07-15 13:22:14.599074] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ef750, cid 4, qid 0 00:23:17.951 [2024-07-15 13:22:14.599144] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:17.951 [2024-07-15 13:22:14.599151] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:17.951 [2024-07-15 13:22:14.599155] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:17.951 [2024-07-15 13:22:14.599159] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10b6970): datao=0, datal=4096, cccid=4 00:23:17.951 [2024-07-15 13:22:14.599164] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10ef750) on tqpair(0x10b6970): expected_datao=0, payload_size=4096 00:23:17.951 [2024-07-15 13:22:14.599169] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:17.951 [2024-07-15 13:22:14.599177] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:17.951 [2024-07-15 13:22:14.599182] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:17.951 [2024-07-15 13:22:14.599190] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:17.951 [2024-07-15 13:22:14.599197] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:17.951 [2024-07-15 13:22:14.599200] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:17.951 [2024-07-15 13:22:14.599217] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10ef750) on tqpair=0x10b6970 00:23:17.952 [2024-07-15 13:22:14.599236] nvme_ctrlr.c:4038:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:23:17.952 [2024-07-15 13:22:14.599268] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:17.952 [2024-07-15 13:22:14.599274] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10b6970) 00:23:17.952 [2024-07-15 13:22:14.599282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.952 [2024-07-15 13:22:14.599291] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:17.952 [2024-07-15 13:22:14.599295] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:17.952 [2024-07-15 13:22:14.599299] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x10b6970) 00:23:17.952 [2024-07-15 13:22:14.599305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 
00:23:17.952 [2024-07-15 13:22:14.599334] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ef750, cid 4, qid 0 00:23:17.952 [2024-07-15 13:22:14.599343] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ef8b0, cid 5, qid 0 00:23:17.952 [2024-07-15 13:22:14.599448] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:17.952 [2024-07-15 13:22:14.599455] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:17.952 [2024-07-15 13:22:14.599459] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:17.952 [2024-07-15 13:22:14.599463] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10b6970): datao=0, datal=1024, cccid=4 00:23:17.952 [2024-07-15 13:22:14.599468] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10ef750) on tqpair(0x10b6970): expected_datao=0, payload_size=1024 00:23:17.952 [2024-07-15 13:22:14.599473] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:17.952 [2024-07-15 13:22:14.599480] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:17.952 [2024-07-15 13:22:14.599484] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:17.952 [2024-07-15 13:22:14.599490] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:17.952 [2024-07-15 13:22:14.599496] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:17.952 [2024-07-15 13:22:14.599500] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:17.952 [2024-07-15 13:22:14.599504] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10ef8b0) on tqpair=0x10b6970 00:23:17.952 [2024-07-15 13:22:14.645234] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:17.952 [2024-07-15 13:22:14.645281] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:17.952 [2024-07-15 13:22:14.645287] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:17.952 [2024-07-15 13:22:14.645294] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10ef750) on tqpair=0x10b6970 00:23:17.952 [2024-07-15 13:22:14.645326] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:17.952 [2024-07-15 13:22:14.645332] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10b6970) 00:23:17.952 [2024-07-15 13:22:14.645347] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.952 [2024-07-15 13:22:14.645391] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ef750, cid 4, qid 0 00:23:17.952 [2024-07-15 13:22:14.645515] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:17.952 [2024-07-15 13:22:14.645523] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:17.952 [2024-07-15 13:22:14.645527] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:17.952 [2024-07-15 13:22:14.645531] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10b6970): datao=0, datal=3072, cccid=4 00:23:17.952 [2024-07-15 13:22:14.645536] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10ef750) on tqpair(0x10b6970): expected_datao=0, payload_size=3072 00:23:17.952 [2024-07-15 13:22:14.645542] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:17.952 [2024-07-15 13:22:14.645551] 
nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:17.952 [2024-07-15 13:22:14.645556] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:17.952 [2024-07-15 13:22:14.645565] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:17.952 [2024-07-15 13:22:14.645572] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:17.952 [2024-07-15 13:22:14.645576] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:17.952 [2024-07-15 13:22:14.645580] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10ef750) on tqpair=0x10b6970 00:23:17.952 [2024-07-15 13:22:14.645592] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:17.952 [2024-07-15 13:22:14.645597] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10b6970) 00:23:17.952 [2024-07-15 13:22:14.645604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.952 [2024-07-15 13:22:14.645631] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ef750, cid 4, qid 0 00:23:17.952 [2024-07-15 13:22:14.645707] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:17.952 [2024-07-15 13:22:14.645714] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:17.952 [2024-07-15 13:22:14.645718] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:17.952 [2024-07-15 13:22:14.645722] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10b6970): datao=0, datal=8, cccid=4 00:23:17.952 [2024-07-15 13:22:14.645727] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10ef750) on tqpair(0x10b6970): expected_datao=0, payload_size=8 00:23:17.952 [2024-07-15 13:22:14.645731] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:17.952 [2024-07-15 13:22:14.645739] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:17.952 [2024-07-15 13:22:14.645743] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:17.952 [2024-07-15 13:22:14.687340] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:17.952 [2024-07-15 13:22:14.687389] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:17.952 [2024-07-15 13:22:14.687396] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:17.952 [2024-07-15 13:22:14.687403] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10ef750) on tqpair=0x10b6970 00:23:18.216 ===================================================== 00:23:18.216 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:18.216 ===================================================== 00:23:18.216 Controller Capabilities/Features 00:23:18.216 ================================ 00:23:18.216 Vendor ID: 0000 00:23:18.216 Subsystem Vendor ID: 0000 00:23:18.216 Serial Number: .................... 00:23:18.216 Model Number: ........................................ 
00:23:18.216 Firmware Version: 24.05.1 00:23:18.216 Recommended Arb Burst: 0 00:23:18.216 IEEE OUI Identifier: 00 00 00 00:23:18.216 Multi-path I/O 00:23:18.216 May have multiple subsystem ports: No 00:23:18.216 May have multiple controllers: No 00:23:18.216 Associated with SR-IOV VF: No 00:23:18.216 Max Data Transfer Size: 131072 00:23:18.216 Max Number of Namespaces: 0 00:23:18.216 Max Number of I/O Queues: 1024 00:23:18.216 NVMe Specification Version (VS): 1.3 00:23:18.216 NVMe Specification Version (Identify): 1.3 00:23:18.216 Maximum Queue Entries: 128 00:23:18.216 Contiguous Queues Required: Yes 00:23:18.216 Arbitration Mechanisms Supported 00:23:18.216 Weighted Round Robin: Not Supported 00:23:18.216 Vendor Specific: Not Supported 00:23:18.216 Reset Timeout: 15000 ms 00:23:18.216 Doorbell Stride: 4 bytes 00:23:18.216 NVM Subsystem Reset: Not Supported 00:23:18.216 Command Sets Supported 00:23:18.216 NVM Command Set: Supported 00:23:18.216 Boot Partition: Not Supported 00:23:18.216 Memory Page Size Minimum: 4096 bytes 00:23:18.216 Memory Page Size Maximum: 4096 bytes 00:23:18.217 Persistent Memory Region: Not Supported 00:23:18.217 Optional Asynchronous Events Supported 00:23:18.217 Namespace Attribute Notices: Not Supported 00:23:18.217 Firmware Activation Notices: Not Supported 00:23:18.217 ANA Change Notices: Not Supported 00:23:18.217 PLE Aggregate Log Change Notices: Not Supported 00:23:18.217 LBA Status Info Alert Notices: Not Supported 00:23:18.217 EGE Aggregate Log Change Notices: Not Supported 00:23:18.217 Normal NVM Subsystem Shutdown event: Not Supported 00:23:18.217 Zone Descriptor Change Notices: Not Supported 00:23:18.217 Discovery Log Change Notices: Supported 00:23:18.217 Controller Attributes 00:23:18.217 128-bit Host Identifier: Not Supported 00:23:18.217 Non-Operational Permissive Mode: Not Supported 00:23:18.217 NVM Sets: Not Supported 00:23:18.217 Read Recovery Levels: Not Supported 00:23:18.217 Endurance Groups: Not Supported 00:23:18.217 Predictable Latency Mode: Not Supported 00:23:18.217 Traffic Based Keep ALive: Not Supported 00:23:18.217 Namespace Granularity: Not Supported 00:23:18.217 SQ Associations: Not Supported 00:23:18.217 UUID List: Not Supported 00:23:18.217 Multi-Domain Subsystem: Not Supported 00:23:18.217 Fixed Capacity Management: Not Supported 00:23:18.217 Variable Capacity Management: Not Supported 00:23:18.217 Delete Endurance Group: Not Supported 00:23:18.217 Delete NVM Set: Not Supported 00:23:18.217 Extended LBA Formats Supported: Not Supported 00:23:18.217 Flexible Data Placement Supported: Not Supported 00:23:18.217 00:23:18.217 Controller Memory Buffer Support 00:23:18.217 ================================ 00:23:18.217 Supported: No 00:23:18.217 00:23:18.217 Persistent Memory Region Support 00:23:18.217 ================================ 00:23:18.217 Supported: No 00:23:18.217 00:23:18.217 Admin Command Set Attributes 00:23:18.217 ============================ 00:23:18.217 Security Send/Receive: Not Supported 00:23:18.217 Format NVM: Not Supported 00:23:18.217 Firmware Activate/Download: Not Supported 00:23:18.217 Namespace Management: Not Supported 00:23:18.217 Device Self-Test: Not Supported 00:23:18.217 Directives: Not Supported 00:23:18.217 NVMe-MI: Not Supported 00:23:18.217 Virtualization Management: Not Supported 00:23:18.217 Doorbell Buffer Config: Not Supported 00:23:18.217 Get LBA Status Capability: Not Supported 00:23:18.217 Command & Feature Lockdown Capability: Not Supported 00:23:18.217 Abort Command Limit: 1 00:23:18.217 
Async Event Request Limit: 4 00:23:18.217 Number of Firmware Slots: N/A 00:23:18.217 Firmware Slot 1 Read-Only: N/A 00:23:18.217 Firmware Activation Without Reset: N/A 00:23:18.217 Multiple Update Detection Support: N/A 00:23:18.217 Firmware Update Granularity: No Information Provided 00:23:18.217 Per-Namespace SMART Log: No 00:23:18.217 Asymmetric Namespace Access Log Page: Not Supported 00:23:18.217 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:23:18.217 Command Effects Log Page: Not Supported 00:23:18.217 Get Log Page Extended Data: Supported 00:23:18.217 Telemetry Log Pages: Not Supported 00:23:18.217 Persistent Event Log Pages: Not Supported 00:23:18.217 Supported Log Pages Log Page: May Support 00:23:18.217 Commands Supported & Effects Log Page: Not Supported 00:23:18.217 Feature Identifiers & Effects Log Page:May Support 00:23:18.217 NVMe-MI Commands & Effects Log Page: May Support 00:23:18.217 Data Area 4 for Telemetry Log: Not Supported 00:23:18.217 Error Log Page Entries Supported: 128 00:23:18.217 Keep Alive: Not Supported 00:23:18.217 00:23:18.217 NVM Command Set Attributes 00:23:18.217 ========================== 00:23:18.217 Submission Queue Entry Size 00:23:18.217 Max: 1 00:23:18.217 Min: 1 00:23:18.217 Completion Queue Entry Size 00:23:18.217 Max: 1 00:23:18.217 Min: 1 00:23:18.217 Number of Namespaces: 0 00:23:18.217 Compare Command: Not Supported 00:23:18.217 Write Uncorrectable Command: Not Supported 00:23:18.217 Dataset Management Command: Not Supported 00:23:18.217 Write Zeroes Command: Not Supported 00:23:18.217 Set Features Save Field: Not Supported 00:23:18.217 Reservations: Not Supported 00:23:18.217 Timestamp: Not Supported 00:23:18.217 Copy: Not Supported 00:23:18.217 Volatile Write Cache: Not Present 00:23:18.217 Atomic Write Unit (Normal): 1 00:23:18.217 Atomic Write Unit (PFail): 1 00:23:18.217 Atomic Compare & Write Unit: 1 00:23:18.217 Fused Compare & Write: Supported 00:23:18.217 Scatter-Gather List 00:23:18.217 SGL Command Set: Supported 00:23:18.217 SGL Keyed: Supported 00:23:18.217 SGL Bit Bucket Descriptor: Not Supported 00:23:18.217 SGL Metadata Pointer: Not Supported 00:23:18.217 Oversized SGL: Not Supported 00:23:18.217 SGL Metadata Address: Not Supported 00:23:18.217 SGL Offset: Supported 00:23:18.217 Transport SGL Data Block: Not Supported 00:23:18.217 Replay Protected Memory Block: Not Supported 00:23:18.217 00:23:18.217 Firmware Slot Information 00:23:18.217 ========================= 00:23:18.217 Active slot: 0 00:23:18.217 00:23:18.217 00:23:18.217 Error Log 00:23:18.217 ========= 00:23:18.217 00:23:18.217 Active Namespaces 00:23:18.217 ================= 00:23:18.217 Discovery Log Page 00:23:18.217 ================== 00:23:18.217 Generation Counter: 2 00:23:18.217 Number of Records: 2 00:23:18.217 Record Format: 0 00:23:18.217 00:23:18.217 Discovery Log Entry 0 00:23:18.217 ---------------------- 00:23:18.217 Transport Type: 3 (TCP) 00:23:18.217 Address Family: 1 (IPv4) 00:23:18.217 Subsystem Type: 3 (Current Discovery Subsystem) 00:23:18.217 Entry Flags: 00:23:18.217 Duplicate Returned Information: 1 00:23:18.217 Explicit Persistent Connection Support for Discovery: 1 00:23:18.217 Transport Requirements: 00:23:18.217 Secure Channel: Not Required 00:23:18.217 Port ID: 0 (0x0000) 00:23:18.217 Controller ID: 65535 (0xffff) 00:23:18.217 Admin Max SQ Size: 128 00:23:18.217 Transport Service Identifier: 4420 00:23:18.217 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:23:18.217 Transport Address: 10.0.0.2 00:23:18.217 
Discovery Log Entry 1 00:23:18.217 ---------------------- 00:23:18.217 Transport Type: 3 (TCP) 00:23:18.217 Address Family: 1 (IPv4) 00:23:18.217 Subsystem Type: 2 (NVM Subsystem) 00:23:18.217 Entry Flags: 00:23:18.217 Duplicate Returned Information: 0 00:23:18.217 Explicit Persistent Connection Support for Discovery: 0 00:23:18.217 Transport Requirements: 00:23:18.217 Secure Channel: Not Required 00:23:18.217 Port ID: 0 (0x0000) 00:23:18.217 Controller ID: 65535 (0xffff) 00:23:18.217 Admin Max SQ Size: 128 00:23:18.218 Transport Service Identifier: 4420 00:23:18.218 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:23:18.218 Transport Address: 10.0.0.2 [2024-07-15 13:22:14.687599] nvme_ctrlr.c:4234:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:23:18.218 [2024-07-15 13:22:14.687624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.218 [2024-07-15 13:22:14.687633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.218 [2024-07-15 13:22:14.687640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.218 [2024-07-15 13:22:14.687647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.218 [2024-07-15 13:22:14.687662] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:18.218 [2024-07-15 13:22:14.687667] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:18.218 [2024-07-15 13:22:14.687671] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10b6970) 00:23:18.218 [2024-07-15 13:22:14.687685] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.218 [2024-07-15 13:22:14.687717] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ef5f0, cid 3, qid 0 00:23:18.218 [2024-07-15 13:22:14.687814] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:18.218 [2024-07-15 13:22:14.687822] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:18.218 [2024-07-15 13:22:14.687827] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:18.218 [2024-07-15 13:22:14.687831] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10ef5f0) on tqpair=0x10b6970 00:23:18.218 [2024-07-15 13:22:14.687841] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:18.218 [2024-07-15 13:22:14.687846] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:18.218 [2024-07-15 13:22:14.687850] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10b6970) 00:23:18.218 [2024-07-15 13:22:14.687858] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.218 [2024-07-15 13:22:14.687884] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ef5f0, cid 3, qid 0 00:23:18.218 [2024-07-15 13:22:14.687966] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:18.218 [2024-07-15 13:22:14.687973] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:18.218 [2024-07-15 13:22:14.687977] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:18.218 [2024-07-15 13:22:14.687981] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10ef5f0) on tqpair=0x10b6970 00:23:18.218 [2024-07-15 13:22:14.687994] nvme_ctrlr.c:1084:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:23:18.218 [2024-07-15 13:22:14.687999] nvme_ctrlr.c:1087:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:23:18.218 [2024-07-15 13:22:14.688011] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:18.218 [2024-07-15 13:22:14.688016] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:18.218 [2024-07-15 13:22:14.688020] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10b6970) 00:23:18.218 [2024-07-15 13:22:14.688028] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.218 [2024-07-15 13:22:14.688047] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ef5f0, cid 3, qid 0 00:23:18.218 [2024-07-15 13:22:14.688104] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:18.218 [2024-07-15 13:22:14.688111] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:18.218 [2024-07-15 13:22:14.688115] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:18.218 [2024-07-15 13:22:14.688119] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10ef5f0) on tqpair=0x10b6970 00:23:18.218 [2024-07-15 13:22:14.688132] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:18.218 [2024-07-15 13:22:14.688137] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:18.218 [2024-07-15 13:22:14.688141] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10b6970) 00:23:18.218 [2024-07-15 13:22:14.688148] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.218 [2024-07-15 13:22:14.688167] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ef5f0, cid 3, qid 0 00:23:18.218 [2024-07-15 13:22:14.688237] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:18.218 [2024-07-15 13:22:14.688247] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:18.218 [2024-07-15 13:22:14.688251] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:18.218 [2024-07-15 13:22:14.688255] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10ef5f0) on tqpair=0x10b6970 00:23:18.218 [2024-07-15 13:22:14.688267] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:18.218 [2024-07-15 13:22:14.688272] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:18.218 [2024-07-15 13:22:14.688277] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10b6970) 00:23:18.218 [2024-07-15 13:22:14.688284] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.218 [2024-07-15 13:22:14.688306] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ef5f0, cid 3, qid 0 00:23:18.218 [2024-07-15 13:22:14.688364] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:18.218 [2024-07-15 
13:22:14.688371] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:18.218 [2024-07-15 13:22:14.688375] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:18.218 [2024-07-15 13:22:14.688379] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10ef5f0) on tqpair=0x10b6970 00:23:18.218 [2024-07-15 13:22:14.688391] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:18.218 [2024-07-15 13:22:14.688395] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:18.218 [2024-07-15 13:22:14.688399] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10b6970) 00:23:18.218 [2024-07-15 13:22:14.688407] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.218 [2024-07-15 13:22:14.688426] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ef5f0, cid 3, qid 0 00:23:18.218 [2024-07-15 13:22:14.688480] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:18.218 [2024-07-15 13:22:14.688487] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:18.218 [2024-07-15 13:22:14.688490] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:18.218 [2024-07-15 13:22:14.688495] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10ef5f0) on tqpair=0x10b6970 00:23:18.218 [2024-07-15 13:22:14.688506] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:18.218 [2024-07-15 13:22:14.688511] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:18.218 [2024-07-15 13:22:14.688515] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10b6970) 00:23:18.218 [2024-07-15 13:22:14.688522] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.218 [2024-07-15 13:22:14.688541] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ef5f0, cid 3, qid 0 00:23:18.218 [2024-07-15 13:22:14.688597] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:18.218 [2024-07-15 13:22:14.688610] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:18.218 [2024-07-15 13:22:14.688614] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:18.218 [2024-07-15 13:22:14.688619] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10ef5f0) on tqpair=0x10b6970 00:23:18.218 [2024-07-15 13:22:14.688631] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:18.218 [2024-07-15 13:22:14.688636] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:18.218 [2024-07-15 13:22:14.688640] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10b6970) 00:23:18.218 [2024-07-15 13:22:14.688647] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.218 [2024-07-15 13:22:14.688667] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ef5f0, cid 3, qid 0 00:23:18.218 [2024-07-15 13:22:14.688721] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:18.218 [2024-07-15 13:22:14.688729] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:18.218 [2024-07-15 13:22:14.688733] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:23:18.218 [2024-07-15 13:22:14.688737] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10ef5f0) on tqpair=0x10b6970 00:23:18.218 [2024-07-15 13:22:14.688750] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:18.218 [2024-07-15 13:22:14.688754] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:18.218 [2024-07-15 13:22:14.688758] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10b6970) 00:23:18.218 [2024-07-15 13:22:14.688766] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.218 [2024-07-15 13:22:14.688785] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ef5f0, cid 3, qid 0 00:23:18.218 [2024-07-15 13:22:14.688839] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:18.218 [2024-07-15 13:22:14.688845] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:18.219 [2024-07-15 13:22:14.688849] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:18.219 [2024-07-15 13:22:14.688853] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10ef5f0) on tqpair=0x10b6970 00:23:18.219 [2024-07-15 13:22:14.688865] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:18.219 [2024-07-15 13:22:14.688870] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:18.219 [2024-07-15 13:22:14.688874] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10b6970) 00:23:18.219 [2024-07-15 13:22:14.688882] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.219 [2024-07-15 13:22:14.688900] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ef5f0, cid 3, qid 0 00:23:18.219 [2024-07-15 13:22:14.688951] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:18.219 [2024-07-15 13:22:14.688958] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:18.219 [2024-07-15 13:22:14.688962] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:18.219 [2024-07-15 13:22:14.688966] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10ef5f0) on tqpair=0x10b6970 00:23:18.219 [2024-07-15 13:22:14.688978] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:18.219 [2024-07-15 13:22:14.688983] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:18.219 [2024-07-15 13:22:14.688987] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10b6970) 00:23:18.219 [2024-07-15 13:22:14.688994] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.219 [2024-07-15 13:22:14.689012] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ef5f0, cid 3, qid 0 00:23:18.219 [2024-07-15 13:22:14.689072] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:18.219 [2024-07-15 13:22:14.689084] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:18.219 [2024-07-15 13:22:14.689088] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:18.219 [2024-07-15 13:22:14.689092] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10ef5f0) on tqpair=0x10b6970 00:23:18.219 [2024-07-15 13:22:14.689104] 
nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:18.219 [2024-07-15 13:22:14.689109] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:18.219 [2024-07-15 13:22:14.689113] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10b6970) 00:23:18.219 [2024-07-15 13:22:14.689121] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.219 [2024-07-15 13:22:14.689141] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ef5f0, cid 3, qid 0 00:23:18.219 [2024-07-15 13:22:14.689193] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:18.219 [2024-07-15 13:22:14.689201] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:18.219 [2024-07-15 13:22:14.693225] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:18.219 [2024-07-15 13:22:14.693235] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10ef5f0) on tqpair=0x10b6970 00:23:18.219 [2024-07-15 13:22:14.693254] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:18.219 [2024-07-15 13:22:14.693260] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:18.219 [2024-07-15 13:22:14.693264] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10b6970) 00:23:18.219 [2024-07-15 13:22:14.693273] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.219 [2024-07-15 13:22:14.693300] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ef5f0, cid 3, qid 0 00:23:18.219 [2024-07-15 13:22:14.693370] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:18.219 [2024-07-15 13:22:14.693378] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:18.219 [2024-07-15 13:22:14.693382] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:18.219 [2024-07-15 13:22:14.693386] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10ef5f0) on tqpair=0x10b6970 00:23:18.219 [2024-07-15 13:22:14.693396] nvme_ctrlr.c:1206:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:23:18.219 00:23:18.219 13:22:14 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:23:18.219 [2024-07-15 13:22:14.732658] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:23:18.219 [2024-07-15 13:22:14.732709] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104482 ] 00:23:18.219 [2024-07-15 13:22:14.871459] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:23:18.219 [2024-07-15 13:22:14.871541] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:18.219 [2024-07-15 13:22:14.871549] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:18.219 [2024-07-15 13:22:14.871565] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:18.219 [2024-07-15 13:22:14.871577] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:18.219 [2024-07-15 13:22:14.871747] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:23:18.219 [2024-07-15 13:22:14.871798] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x193c970 0 00:23:18.219 [2024-07-15 13:22:14.884234] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:18.219 [2024-07-15 13:22:14.884274] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:18.219 [2024-07-15 13:22:14.884280] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:18.219 [2024-07-15 13:22:14.884284] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:18.219 [2024-07-15 13:22:14.884340] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:18.219 [2024-07-15 13:22:14.884348] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:18.219 [2024-07-15 13:22:14.884353] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x193c970) 00:23:18.219 [2024-07-15 13:22:14.884372] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:18.219 [2024-07-15 13:22:14.884411] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19751d0, cid 0, qid 0 00:23:18.219 [2024-07-15 13:22:14.892231] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:18.219 [2024-07-15 13:22:14.892259] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:18.219 [2024-07-15 13:22:14.892265] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:18.219 [2024-07-15 13:22:14.892271] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19751d0) on tqpair=0x193c970 00:23:18.219 [2024-07-15 13:22:14.892290] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:18.219 [2024-07-15 13:22:14.892300] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:23:18.219 [2024-07-15 13:22:14.892307] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:23:18.219 [2024-07-15 13:22:14.892330] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:18.219 [2024-07-15 13:22:14.892336] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:18.219 [2024-07-15 13:22:14.892340] nvme_tcp.c: 
959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x193c970) 00:23:18.219 [2024-07-15 13:22:14.892354] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.219 [2024-07-15 13:22:14.892387] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19751d0, cid 0, qid 0 00:23:18.219 [2024-07-15 13:22:14.892473] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:18.219 [2024-07-15 13:22:14.892480] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:18.219 [2024-07-15 13:22:14.892484] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:18.219 [2024-07-15 13:22:14.892488] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19751d0) on tqpair=0x193c970 00:23:18.219 [2024-07-15 13:22:14.892495] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:23:18.219 [2024-07-15 13:22:14.892503] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:23:18.219 [2024-07-15 13:22:14.892511] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:18.219 [2024-07-15 13:22:14.892516] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:18.219 [2024-07-15 13:22:14.892520] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x193c970) 00:23:18.219 [2024-07-15 13:22:14.892528] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.219 [2024-07-15 13:22:14.892548] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19751d0, cid 0, qid 0 00:23:18.219 [2024-07-15 13:22:14.892604] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:18.219 [2024-07-15 13:22:14.892611] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:18.220 [2024-07-15 13:22:14.892615] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:18.220 [2024-07-15 13:22:14.892619] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19751d0) on tqpair=0x193c970 00:23:18.220 [2024-07-15 13:22:14.892626] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:23:18.220 [2024-07-15 13:22:14.892635] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:23:18.220 [2024-07-15 13:22:14.892643] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:18.220 [2024-07-15 13:22:14.892647] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:18.220 [2024-07-15 13:22:14.892651] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x193c970) 00:23:18.220 [2024-07-15 13:22:14.892659] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.220 [2024-07-15 13:22:14.892678] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19751d0, cid 0, qid 0 00:23:18.220 [2024-07-15 13:22:14.892733] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:18.220 [2024-07-15 13:22:14.892740] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:18.220 [2024-07-15 
13:22:14.892743] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:18.220 [2024-07-15 13:22:14.892748] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19751d0) on tqpair=0x193c970 00:23:18.220 [2024-07-15 13:22:14.892754] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:18.220 [2024-07-15 13:22:14.892765] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:18.220 [2024-07-15 13:22:14.892770] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:18.220 [2024-07-15 13:22:14.892774] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x193c970) 00:23:18.220 [2024-07-15 13:22:14.892781] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.220 [2024-07-15 13:22:14.892800] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19751d0, cid 0, qid 0 00:23:18.220 [2024-07-15 13:22:14.892860] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:18.220 [2024-07-15 13:22:14.892867] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:18.220 [2024-07-15 13:22:14.892871] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:18.220 [2024-07-15 13:22:14.892875] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19751d0) on tqpair=0x193c970 00:23:18.220 [2024-07-15 13:22:14.892881] nvme_ctrlr.c:3751:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:23:18.220 [2024-07-15 13:22:14.892887] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:23:18.220 [2024-07-15 13:22:14.892895] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:18.220 [2024-07-15 13:22:14.893001] nvme_ctrlr.c:3944:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:23:18.220 [2024-07-15 13:22:14.893006] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:18.220 [2024-07-15 13:22:14.893016] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:18.220 [2024-07-15 13:22:14.893020] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:18.220 [2024-07-15 13:22:14.893024] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x193c970) 00:23:18.220 [2024-07-15 13:22:14.893032] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.220 [2024-07-15 13:22:14.893051] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19751d0, cid 0, qid 0 00:23:18.220 [2024-07-15 13:22:14.893109] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:18.220 [2024-07-15 13:22:14.893121] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:18.220 [2024-07-15 13:22:14.893126] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:18.220 [2024-07-15 13:22:14.893130] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19751d0) on tqpair=0x193c970 00:23:18.220 
[2024-07-15 13:22:14.893137] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:18.220 [2024-07-15 13:22:14.893148] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:18.220 [2024-07-15 13:22:14.893152] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:18.220 [2024-07-15 13:22:14.893156] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x193c970) 00:23:18.220 [2024-07-15 13:22:14.893164] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.220 [2024-07-15 13:22:14.893183] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19751d0, cid 0, qid 0 00:23:18.220 [2024-07-15 13:22:14.893253] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:18.220 [2024-07-15 13:22:14.893262] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:18.220 [2024-07-15 13:22:14.893266] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:18.220 [2024-07-15 13:22:14.893270] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19751d0) on tqpair=0x193c970 00:23:18.220 [2024-07-15 13:22:14.893276] nvme_ctrlr.c:3786:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:18.220 [2024-07-15 13:22:14.893282] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:23:18.220 [2024-07-15 13:22:14.893290] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:23:18.220 [2024-07-15 13:22:14.893302] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:23:18.220 [2024-07-15 13:22:14.893313] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:18.220 [2024-07-15 13:22:14.893318] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x193c970) 00:23:18.220 [2024-07-15 13:22:14.893326] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.220 [2024-07-15 13:22:14.893349] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19751d0, cid 0, qid 0 00:23:18.220 [2024-07-15 13:22:14.893456] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:18.220 [2024-07-15 13:22:14.893463] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:18.220 [2024-07-15 13:22:14.893467] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:18.220 [2024-07-15 13:22:14.893471] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x193c970): datao=0, datal=4096, cccid=0 00:23:18.220 [2024-07-15 13:22:14.893477] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19751d0) on tqpair(0x193c970): expected_datao=0, payload_size=4096 00:23:18.220 [2024-07-15 13:22:14.893482] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:18.220 [2024-07-15 13:22:14.893492] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:18.220 [2024-07-15 13:22:14.893497] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: 
*DEBUG*: enter 00:23:18.220 [2024-07-15 13:22:14.893506] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:18.220 [2024-07-15 13:22:14.893512] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:18.220 [2024-07-15 13:22:14.893516] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:18.220 [2024-07-15 13:22:14.893520] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19751d0) on tqpair=0x193c970 00:23:18.220 [2024-07-15 13:22:14.893531] nvme_ctrlr.c:1986:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:23:18.220 [2024-07-15 13:22:14.893536] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:23:18.220 [2024-07-15 13:22:14.893541] nvme_ctrlr.c:1993:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:23:18.220 [2024-07-15 13:22:14.893546] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:23:18.220 [2024-07-15 13:22:14.893552] nvme_ctrlr.c:2032:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:23:18.220 [2024-07-15 13:22:14.893557] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:23:18.220 [2024-07-15 13:22:14.893571] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:23:18.220 [2024-07-15 13:22:14.893580] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:18.220 [2024-07-15 13:22:14.893584] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:18.220 [2024-07-15 13:22:14.893589] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x193c970) 00:23:18.221 [2024-07-15 13:22:14.893597] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:18.221 [2024-07-15 13:22:14.893618] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19751d0, cid 0, qid 0 00:23:18.221 [2024-07-15 13:22:14.893676] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:18.221 [2024-07-15 13:22:14.893683] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:18.221 [2024-07-15 13:22:14.893687] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:18.221 [2024-07-15 13:22:14.893691] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19751d0) on tqpair=0x193c970 00:23:18.221 [2024-07-15 13:22:14.893700] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:18.221 [2024-07-15 13:22:14.893705] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:18.221 [2024-07-15 13:22:14.893708] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x193c970) 00:23:18.221 [2024-07-15 13:22:14.893715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.221 [2024-07-15 13:22:14.893723] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:18.221 [2024-07-15 13:22:14.893727] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:18.221 [2024-07-15 13:22:14.893731] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=1 on tqpair(0x193c970) 00:23:18.221 [2024-07-15 13:22:14.893737] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.221 [2024-07-15 13:22:14.893744] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:18.221 [2024-07-15 13:22:14.893748] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:18.221 [2024-07-15 13:22:14.893752] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x193c970) 00:23:18.221 [2024-07-15 13:22:14.893758] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.221 [2024-07-15 13:22:14.893764] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:18.221 [2024-07-15 13:22:14.893768] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:18.221 [2024-07-15 13:22:14.893772] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x193c970) 00:23:18.221 [2024-07-15 13:22:14.893778] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.221 [2024-07-15 13:22:14.893784] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:18.221 [2024-07-15 13:22:14.893793] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:18.221 [2024-07-15 13:22:14.893800] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:18.221 [2024-07-15 13:22:14.893804] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x193c970) 00:23:18.221 [2024-07-15 13:22:14.893812] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.221 [2024-07-15 13:22:14.893837] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19751d0, cid 0, qid 0 00:23:18.221 [2024-07-15 13:22:14.893845] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1975330, cid 1, qid 0 00:23:18.221 [2024-07-15 13:22:14.893850] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1975490, cid 2, qid 0 00:23:18.221 [2024-07-15 13:22:14.893855] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19755f0, cid 3, qid 0 00:23:18.221 [2024-07-15 13:22:14.893860] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1975750, cid 4, qid 0 00:23:18.221 [2024-07-15 13:22:14.893956] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:18.221 [2024-07-15 13:22:14.893963] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:18.221 [2024-07-15 13:22:14.893966] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:18.221 [2024-07-15 13:22:14.893971] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1975750) on tqpair=0x193c970 00:23:18.221 [2024-07-15 13:22:14.893977] nvme_ctrlr.c:2904:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:23:18.221 [2024-07-15 13:22:14.893983] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific 
(timeout 30000 ms) 00:23:18.221 [2024-07-15 13:22:14.893992] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:23:18.221 [2024-07-15 13:22:14.894000] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:23:18.221 [2024-07-15 13:22:14.894007] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:18.221 [2024-07-15 13:22:14.894011] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:18.221 [2024-07-15 13:22:14.894015] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x193c970) 00:23:18.221 [2024-07-15 13:22:14.894023] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:18.221 [2024-07-15 13:22:14.894041] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1975750, cid 4, qid 0 00:23:18.221 [2024-07-15 13:22:14.894097] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:18.221 [2024-07-15 13:22:14.894104] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:18.221 [2024-07-15 13:22:14.894107] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:18.221 [2024-07-15 13:22:14.894111] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1975750) on tqpair=0x193c970 00:23:18.221 [2024-07-15 13:22:14.894179] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:23:18.221 [2024-07-15 13:22:14.894191] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:23:18.221 [2024-07-15 13:22:14.894200] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:18.221 [2024-07-15 13:22:14.894217] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x193c970) 00:23:18.221 [2024-07-15 13:22:14.894227] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.221 [2024-07-15 13:22:14.894249] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1975750, cid 4, qid 0 00:23:18.221 [2024-07-15 13:22:14.894321] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:18.221 [2024-07-15 13:22:14.894328] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:18.221 [2024-07-15 13:22:14.894331] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:18.221 [2024-07-15 13:22:14.894335] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x193c970): datao=0, datal=4096, cccid=4 00:23:18.221 [2024-07-15 13:22:14.894341] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1975750) on tqpair(0x193c970): expected_datao=0, payload_size=4096 00:23:18.221 [2024-07-15 13:22:14.894345] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:18.221 [2024-07-15 13:22:14.894353] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:18.221 [2024-07-15 13:22:14.894357] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:18.221 [2024-07-15 13:22:14.894366] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:18.221 
[2024-07-15 13:22:14.894372] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:18.221 [2024-07-15 13:22:14.894376] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:18.221 [2024-07-15 13:22:14.894380] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1975750) on tqpair=0x193c970 00:23:18.221 [2024-07-15 13:22:14.894397] nvme_ctrlr.c:4570:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:23:18.221 [2024-07-15 13:22:14.894408] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:23:18.221 [2024-07-15 13:22:14.894419] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:23:18.221 [2024-07-15 13:22:14.894428] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:18.221 [2024-07-15 13:22:14.894432] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x193c970) 00:23:18.221 [2024-07-15 13:22:14.894439] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.221 [2024-07-15 13:22:14.894460] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1975750, cid 4, qid 0 00:23:18.221 [2024-07-15 13:22:14.894533] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:18.221 [2024-07-15 13:22:14.894540] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:18.221 [2024-07-15 13:22:14.894544] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:18.221 [2024-07-15 13:22:14.894548] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x193c970): datao=0, datal=4096, cccid=4 00:23:18.221 [2024-07-15 13:22:14.894553] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1975750) on tqpair(0x193c970): expected_datao=0, payload_size=4096 00:23:18.222 [2024-07-15 13:22:14.894558] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:18.222 [2024-07-15 13:22:14.894565] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:18.222 [2024-07-15 13:22:14.894569] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:18.222 [2024-07-15 13:22:14.894578] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:18.222 [2024-07-15 13:22:14.894584] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:18.222 [2024-07-15 13:22:14.894587] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:18.222 [2024-07-15 13:22:14.894592] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1975750) on tqpair=0x193c970 00:23:18.222 [2024-07-15 13:22:14.894604] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:23:18.222 [2024-07-15 13:22:14.894615] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:23:18.222 [2024-07-15 13:22:14.894623] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:18.222 [2024-07-15 13:22:14.894628] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x193c970) 00:23:18.222 [2024-07-15 13:22:14.894635] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.222 [2024-07-15 13:22:14.894655] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1975750, cid 4, qid 0 00:23:18.222 [2024-07-15 13:22:14.894739] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:18.222 [2024-07-15 13:22:14.894747] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:18.222 [2024-07-15 13:22:14.894751] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:18.222 [2024-07-15 13:22:14.894755] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x193c970): datao=0, datal=4096, cccid=4 00:23:18.222 [2024-07-15 13:22:14.894759] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1975750) on tqpair(0x193c970): expected_datao=0, payload_size=4096 00:23:18.222 [2024-07-15 13:22:14.894764] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:18.222 [2024-07-15 13:22:14.894771] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:18.222 [2024-07-15 13:22:14.894775] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:18.222 [2024-07-15 13:22:14.894784] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:18.222 [2024-07-15 13:22:14.894790] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:18.222 [2024-07-15 13:22:14.894794] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:18.222 [2024-07-15 13:22:14.894798] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1975750) on tqpair=0x193c970 00:23:18.222 [2024-07-15 13:22:14.894808] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:23:18.222 [2024-07-15 13:22:14.894817] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:23:18.222 [2024-07-15 13:22:14.894828] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:23:18.222 [2024-07-15 13:22:14.894835] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:23:18.222 [2024-07-15 13:22:14.894841] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:23:18.222 [2024-07-15 13:22:14.894847] nvme_ctrlr.c:2992:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:23:18.222 [2024-07-15 13:22:14.894852] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:23:18.222 [2024-07-15 13:22:14.894858] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:23:18.222 [2024-07-15 13:22:14.894883] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:18.222 [2024-07-15 13:22:14.894889] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x193c970) 00:23:18.222 [2024-07-15 13:22:14.894896] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
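The DEBUG/NOTICE lines above are the tail of the host-side controller initialization state machine: the driver walks through Set Features (Number of Queues), the Identify variants, and the Get Log Page/Get Features commands on the admin queue before marking the controller ready. From an application's point of view this whole exchange is driven by a single spdk_nvme_connect() call. A minimal sketch against the target that appears in this trace (TCP, 10.0.0.2:4420, subnqn nqn.2016-06.io.spdk:cnode1) could look like the following; the program name and error handling are illustrative and not taken from the test itself.

    /*
     * Sketch only: connect to the NVMe-oF/TCP subsystem exercised in this
     * trace. spdk_nvme_connect() internally runs the admin-queue init
     * sequence logged above (Set Features: Number of Queues, Identify,
     * Get Log Page, Get Features) before returning a ready controller.
     * Address and NQN are copied from the log; everything else is
     * illustrative.
     */
    #include "spdk/stdinc.h"
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int main(void)
    {
            struct spdk_env_opts env_opts;
            struct spdk_nvme_transport_id trid = {};
            struct spdk_nvme_ctrlr *ctrlr;

            spdk_env_opts_init(&env_opts);
            env_opts.name = "connect_sketch";        /* illustrative name */
            if (spdk_env_init(&env_opts) != 0) {
                    return 1;
            }

            if (spdk_nvme_transport_id_parse(&trid,
                            "trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
                            "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
                    return 1;
            }

            /* Synchronous attach: returns once the controller reaches the
             * "ready" state seen in the DEBUG output, or NULL on failure. */
            ctrlr = spdk_nvme_connect(&trid, NULL, 0);
            if (ctrlr == NULL) {
                    fprintf(stderr, "spdk_nvme_connect() failed\n");
                    return 1;
            }

            spdk_nvme_detach(ctrlr);
            return 0;
    }
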
00:23:18.222 [2024-07-15 13:22:14.894904] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:18.222 [2024-07-15 13:22:14.894908] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:18.222 [2024-07-15 13:22:14.894912] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x193c970) 00:23:18.222 [2024-07-15 13:22:14.894918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.222 [2024-07-15 13:22:14.894945] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1975750, cid 4, qid 0 00:23:18.222 [2024-07-15 13:22:14.894953] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19758b0, cid 5, qid 0 00:23:18.222 [2024-07-15 13:22:14.895033] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:18.222 [2024-07-15 13:22:14.895047] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:18.222 [2024-07-15 13:22:14.895052] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:18.222 [2024-07-15 13:22:14.895056] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1975750) on tqpair=0x193c970 00:23:18.222 [2024-07-15 13:22:14.895065] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:18.222 [2024-07-15 13:22:14.895071] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:18.222 [2024-07-15 13:22:14.895075] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:18.222 [2024-07-15 13:22:14.895079] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19758b0) on tqpair=0x193c970 00:23:18.222 [2024-07-15 13:22:14.895092] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:18.222 [2024-07-15 13:22:14.895096] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x193c970) 00:23:18.222 [2024-07-15 13:22:14.895104] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.222 [2024-07-15 13:22:14.895124] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19758b0, cid 5, qid 0 00:23:18.222 [2024-07-15 13:22:14.895183] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:18.222 [2024-07-15 13:22:14.895190] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:18.222 [2024-07-15 13:22:14.895193] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:18.222 [2024-07-15 13:22:14.895198] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19758b0) on tqpair=0x193c970 00:23:18.223 [2024-07-15 13:22:14.895226] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:18.223 [2024-07-15 13:22:14.895233] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x193c970) 00:23:18.223 [2024-07-15 13:22:14.895240] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.223 [2024-07-15 13:22:14.895261] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19758b0, cid 5, qid 0 00:23:18.223 [2024-07-15 13:22:14.895322] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:18.223 [2024-07-15 13:22:14.895329] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:18.223 
[2024-07-15 13:22:14.895332] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:18.223 [2024-07-15 13:22:14.895337] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19758b0) on tqpair=0x193c970 00:23:18.223 [2024-07-15 13:22:14.895348] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:18.223 [2024-07-15 13:22:14.895353] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x193c970) 00:23:18.223 [2024-07-15 13:22:14.895360] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.223 [2024-07-15 13:22:14.895378] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19758b0, cid 5, qid 0 00:23:18.223 [2024-07-15 13:22:14.895434] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:18.223 [2024-07-15 13:22:14.895441] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:18.223 [2024-07-15 13:22:14.895445] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:18.223 [2024-07-15 13:22:14.895449] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19758b0) on tqpair=0x193c970 00:23:18.223 [2024-07-15 13:22:14.895464] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:18.223 [2024-07-15 13:22:14.895469] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x193c970) 00:23:18.223 [2024-07-15 13:22:14.895476] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.223 [2024-07-15 13:22:14.895484] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:18.223 [2024-07-15 13:22:14.895488] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x193c970) 00:23:18.223 [2024-07-15 13:22:14.895495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.223 [2024-07-15 13:22:14.895503] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:18.223 [2024-07-15 13:22:14.895507] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x193c970) 00:23:18.223 [2024-07-15 13:22:14.895513] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.223 [2024-07-15 13:22:14.895522] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:18.223 [2024-07-15 13:22:14.895526] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x193c970) 00:23:18.223 [2024-07-15 13:22:14.895532] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.223 [2024-07-15 13:22:14.895552] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19758b0, cid 5, qid 0 00:23:18.223 [2024-07-15 13:22:14.895559] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1975750, cid 4, qid 0 00:23:18.223 [2024-07-15 13:22:14.895564] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1975a10, cid 6, qid 0 00:23:18.223 [2024-07-15 13:22:14.895569] 
nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1975b70, cid 7, qid 0 00:23:18.223 [2024-07-15 13:22:14.895716] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:18.223 [2024-07-15 13:22:14.895728] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:18.223 [2024-07-15 13:22:14.895732] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:18.223 [2024-07-15 13:22:14.895736] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x193c970): datao=0, datal=8192, cccid=5 00:23:18.223 [2024-07-15 13:22:14.895741] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19758b0) on tqpair(0x193c970): expected_datao=0, payload_size=8192 00:23:18.223 [2024-07-15 13:22:14.895746] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:18.223 [2024-07-15 13:22:14.895764] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:18.223 [2024-07-15 13:22:14.895769] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:18.223 [2024-07-15 13:22:14.895775] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:18.223 [2024-07-15 13:22:14.895781] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:18.223 [2024-07-15 13:22:14.895785] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:18.223 [2024-07-15 13:22:14.895789] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x193c970): datao=0, datal=512, cccid=4 00:23:18.223 [2024-07-15 13:22:14.895794] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1975750) on tqpair(0x193c970): expected_datao=0, payload_size=512 00:23:18.223 [2024-07-15 13:22:14.895798] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:18.223 [2024-07-15 13:22:14.895805] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:18.223 [2024-07-15 13:22:14.895809] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:18.223 [2024-07-15 13:22:14.895815] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:18.223 [2024-07-15 13:22:14.895820] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:18.223 [2024-07-15 13:22:14.895824] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:18.223 [2024-07-15 13:22:14.895828] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x193c970): datao=0, datal=512, cccid=6 00:23:18.223 [2024-07-15 13:22:14.895833] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1975a10) on tqpair(0x193c970): expected_datao=0, payload_size=512 00:23:18.223 [2024-07-15 13:22:14.895837] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:18.223 [2024-07-15 13:22:14.895844] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:18.223 [2024-07-15 13:22:14.895847] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:18.223 [2024-07-15 13:22:14.895853] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:18.223 [2024-07-15 13:22:14.895859] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:18.223 [2024-07-15 13:22:14.895863] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:18.223 [2024-07-15 13:22:14.895867] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x193c970): datao=0, datal=4096, cccid=7 00:23:18.223 [2024-07-15 13:22:14.895871] 
nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1975b70) on tqpair(0x193c970): expected_datao=0, payload_size=4096 00:23:18.223 [2024-07-15 13:22:14.895876] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:18.223 [2024-07-15 13:22:14.895883] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:18.223 [2024-07-15 13:22:14.895887] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:18.223 [2024-07-15 13:22:14.895901] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:18.223 [2024-07-15 13:22:14.895907] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:18.223 [2024-07-15 13:22:14.895911] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:18.223 [2024-07-15 13:22:14.895915] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19758b0) on tqpair=0x193c970 00:23:18.223 [2024-07-15 13:22:14.895933] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:18.223 [2024-07-15 13:22:14.895940] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:18.223 [2024-07-15 13:22:14.895943] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:18.223 [2024-07-15 13:22:14.895947] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1975750) on tqpair=0x193c970 00:23:18.223 ===================================================== 00:23:18.223 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:18.223 ===================================================== 00:23:18.223 Controller Capabilities/Features 00:23:18.223 ================================ 00:23:18.223 Vendor ID: 8086 00:23:18.223 Subsystem Vendor ID: 8086 00:23:18.223 Serial Number: SPDK00000000000001 00:23:18.223 Model Number: SPDK bdev Controller 00:23:18.223 Firmware Version: 24.05.1 00:23:18.223 Recommended Arb Burst: 6 00:23:18.223 IEEE OUI Identifier: e4 d2 5c 00:23:18.223 Multi-path I/O 00:23:18.223 May have multiple subsystem ports: Yes 00:23:18.223 May have multiple controllers: Yes 00:23:18.223 Associated with SR-IOV VF: No 00:23:18.223 Max Data Transfer Size: 131072 00:23:18.223 Max Number of Namespaces: 32 00:23:18.223 Max Number of I/O Queues: 127 00:23:18.223 NVMe Specification Version (VS): 1.3 00:23:18.223 NVMe Specification Version (Identify): 1.3 00:23:18.223 Maximum Queue Entries: 128 00:23:18.223 Contiguous Queues Required: Yes 00:23:18.223 Arbitration Mechanisms Supported 00:23:18.224 Weighted Round Robin: Not Supported 00:23:18.224 Vendor Specific: Not Supported 00:23:18.224 Reset Timeout: 15000 ms 00:23:18.224 Doorbell Stride: 4 bytes 00:23:18.224 NVM Subsystem Reset: Not Supported 00:23:18.224 Command Sets Supported 00:23:18.224 NVM Command Set: Supported 00:23:18.224 Boot Partition: Not Supported 00:23:18.224 Memory Page Size Minimum: 4096 bytes 00:23:18.224 Memory Page Size Maximum: 4096 bytes 00:23:18.224 Persistent Memory Region: Not Supported 00:23:18.224 Optional Asynchronous Events Supported 00:23:18.224 Namespace Attribute Notices: Supported 00:23:18.224 Firmware Activation Notices: Not Supported 00:23:18.224 ANA Change Notices: Not Supported 00:23:18.224 PLE Aggregate Log Change Notices: Not Supported 00:23:18.224 LBA Status Info Alert Notices: Not Supported 00:23:18.224 EGE Aggregate Log Change Notices: Not Supported 00:23:18.224 Normal NVM Subsystem Shutdown event: Not Supported 00:23:18.224 Zone Descriptor Change Notices: Not Supported 00:23:18.224 Discovery Log Change Notices: Not 
Supported 00:23:18.224 Controller Attributes 00:23:18.224 128-bit Host Identifier: Supported 00:23:18.224 Non-Operational Permissive Mode: Not Supported 00:23:18.224 NVM Sets: Not Supported 00:23:18.224 Read Recovery Levels: Not Supported 00:23:18.224 Endurance Groups: Not Supported 00:23:18.224 Predictable Latency Mode: Not Supported 00:23:18.224 Traffic Based Keep ALive: Not Supported 00:23:18.224 Namespace Granularity: Not Supported 00:23:18.224 SQ Associations: Not Supported 00:23:18.224 UUID List: Not Supported 00:23:18.224 Multi-Domain Subsystem: Not Supported 00:23:18.224 Fixed Capacity Management: Not Supported 00:23:18.224 Variable Capacity Management: Not Supported 00:23:18.224 Delete Endurance Group: Not Supported 00:23:18.224 Delete NVM Set: Not Supported 00:23:18.224 Extended LBA Formats Supported: Not Supported 00:23:18.224 Flexible Data Placement Supported: Not Supported 00:23:18.224 00:23:18.224 Controller Memory Buffer Support 00:23:18.224 ================================ 00:23:18.224 Supported: No 00:23:18.224 00:23:18.224 Persistent Memory Region Support 00:23:18.224 ================================ 00:23:18.224 Supported: No 00:23:18.224 00:23:18.224 Admin Command Set Attributes 00:23:18.224 ============================ 00:23:18.224 Security Send/Receive: Not Supported 00:23:18.224 Format NVM: Not Supported 00:23:18.224 Firmware Activate/Download: Not Supported 00:23:18.224 Namespace Management: Not Supported 00:23:18.224 Device Self-Test: Not Supported 00:23:18.224 Directives: Not Supported 00:23:18.224 NVMe-MI: Not Supported 00:23:18.224 Virtualization Management: Not Supported 00:23:18.224 Doorbell Buffer Config: Not Supported 00:23:18.224 Get LBA Status Capability: Not Supported 00:23:18.224 Command & Feature Lockdown Capability: Not Supported 00:23:18.224 Abort Command Limit: 4 00:23:18.224 Async Event Request Limit: 4 00:23:18.224 Number of Firmware Slots: N/A 00:23:18.224 Firmware Slot 1 Read-Only: N/A 00:23:18.224 Firmware Activation Without Reset: N/A 00:23:18.224 Multiple Update Detection Support: N/A 00:23:18.224 Firmware Update Granularity: No Information Provided 00:23:18.224 Per-Namespace SMART Log: No 00:23:18.224 Asymmetric Namespace Access Log Page: Not Supported 00:23:18.224 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:23:18.224 Command Effects Log Page: Supported 00:23:18.224 Get Log Page Extended Data: Supported 00:23:18.224 Telemetry Log Pages: Not Supported 00:23:18.224 Persistent Event Log Pages: Not Supported 00:23:18.224 Supported Log Pages Log Page: May Support 00:23:18.224 Commands Supported & Effects Log Page: Not Supported 00:23:18.224 Feature Identifiers & Effects Log Page:May Support 00:23:18.224 NVMe-MI Commands & Effects Log Page: May Support 00:23:18.224 Data Area 4 for Telemetry Log: Not Supported 00:23:18.224 Error Log Page Entries Supported: 128 00:23:18.224 Keep Alive: Supported 00:23:18.224 Keep Alive Granularity: 10000 ms 00:23:18.224 00:23:18.224 NVM Command Set Attributes 00:23:18.224 ========================== 00:23:18.224 Submission Queue Entry Size 00:23:18.224 Max: 64 00:23:18.224 Min: 64 00:23:18.224 Completion Queue Entry Size 00:23:18.224 Max: 16 00:23:18.224 Min: 16 00:23:18.224 Number of Namespaces: 32 00:23:18.224 Compare Command: Supported 00:23:18.224 Write Uncorrectable Command: Not Supported 00:23:18.224 Dataset Management Command: Supported 00:23:18.224 Write Zeroes Command: Supported 00:23:18.224 Set Features Save Field: Not Supported 00:23:18.224 Reservations: Supported 00:23:18.224 Timestamp: Not Supported 
00:23:18.224 Copy: Supported 00:23:18.224 Volatile Write Cache: Present 00:23:18.224 Atomic Write Unit (Normal): 1 00:23:18.224 Atomic Write Unit (PFail): 1 00:23:18.224 Atomic Compare & Write Unit: 1 00:23:18.224 Fused Compare & Write: Supported 00:23:18.224 Scatter-Gather List 00:23:18.224 SGL Command Set: Supported 00:23:18.224 SGL Keyed: Supported 00:23:18.224 SGL Bit Bucket Descriptor: Not Supported 00:23:18.224 SGL Metadata Pointer: Not Supported 00:23:18.224 Oversized SGL: Not Supported 00:23:18.224 SGL Metadata Address: Not Supported 00:23:18.224 SGL Offset: Supported 00:23:18.224 Transport SGL Data Block: Not Supported 00:23:18.224 Replay Protected Memory Block: Not Supported 00:23:18.224 00:23:18.224 Firmware Slot Information 00:23:18.224 ========================= 00:23:18.224 Active slot: 1 00:23:18.224 Slot 1 Firmware Revision: 24.05.1 00:23:18.224 00:23:18.224 00:23:18.224 Commands Supported and Effects 00:23:18.224 ============================== 00:23:18.224 Admin Commands 00:23:18.224 -------------- 00:23:18.224 Get Log Page (02h): Supported 00:23:18.224 Identify (06h): Supported 00:23:18.224 Abort (08h): Supported 00:23:18.224 Set Features (09h): Supported 00:23:18.224 Get Features (0Ah): Supported 00:23:18.224 Asynchronous Event Request (0Ch): Supported 00:23:18.224 Keep Alive (18h): Supported 00:23:18.224 I/O Commands 00:23:18.224 ------------ 00:23:18.224 Flush (00h): Supported LBA-Change 00:23:18.224 Write (01h): Supported LBA-Change 00:23:18.224 Read (02h): Supported 00:23:18.224 Compare (05h): Supported 00:23:18.224 Write Zeroes (08h): Supported LBA-Change 00:23:18.224 Dataset Management (09h): Supported LBA-Change 00:23:18.224 Copy (19h): Supported LBA-Change 00:23:18.224 Unknown (79h): Supported LBA-Change 00:23:18.224 Unknown (7Ah): Supported 00:23:18.224 00:23:18.224 Error Log 00:23:18.224 ========= 00:23:18.224 00:23:18.224 Arbitration 00:23:18.224 =========== 00:23:18.224 Arbitration Burst: 1 00:23:18.224 00:23:18.224 Power Management 00:23:18.224 ================ 00:23:18.224 Number of Power States: 1 00:23:18.224 Current Power State: Power State #0 00:23:18.224 Power State #0: 00:23:18.225 Max Power: 0.00 W 00:23:18.225 Non-Operational State: Operational 00:23:18.225 Entry Latency: Not Reported 00:23:18.225 Exit Latency: Not Reported 00:23:18.225 Relative Read Throughput: 0 00:23:18.225 Relative Read Latency: 0 00:23:18.225 Relative Write Throughput: 0 00:23:18.225 Relative Write Latency: 0 00:23:18.225 Idle Power: Not Reported 00:23:18.225 Active Power: Not Reported 00:23:18.225 Non-Operational Permissive Mode: Not Supported 00:23:18.225 00:23:18.225 Health Information 00:23:18.225 ================== 00:23:18.225 Critical Warnings: 00:23:18.225 Available Spare Space: OK 00:23:18.225 Temperature: OK 00:23:18.225 Device Reliability: OK 00:23:18.225 Read Only: No 00:23:18.225 Volatile Memory Backup: OK 00:23:18.225 Current Temperature: 0 Kelvin (-273 Celsius) 00:23:18.225 Temperature Threshold: [2024-07-15 13:22:14.895959] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:18.225 [2024-07-15 13:22:14.895965] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:18.225 [2024-07-15 13:22:14.895969] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:18.225 [2024-07-15 13:22:14.895973] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1975a10) on tqpair=0x193c970 00:23:18.225 [2024-07-15 13:22:14.895987] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:18.225 
[2024-07-15 13:22:14.895994] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:18.225 [2024-07-15 13:22:14.895998] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:18.225 [2024-07-15 13:22:14.896002] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1975b70) on tqpair=0x193c970 00:23:18.225 [2024-07-15 13:22:14.896118] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:18.225 [2024-07-15 13:22:14.896125] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x193c970) 00:23:18.225 [2024-07-15 13:22:14.896134] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.225 [2024-07-15 13:22:14.896159] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1975b70, cid 7, qid 0 00:23:18.225 [2024-07-15 13:22:14.900223] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:18.225 [2024-07-15 13:22:14.900244] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:18.225 [2024-07-15 13:22:14.900249] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:18.225 [2024-07-15 13:22:14.900254] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1975b70) on tqpair=0x193c970 00:23:18.225 [2024-07-15 13:22:14.900321] nvme_ctrlr.c:4234:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:23:18.225 [2024-07-15 13:22:14.900345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.225 [2024-07-15 13:22:14.900353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.225 [2024-07-15 13:22:14.900360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.225 [2024-07-15 13:22:14.900366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.225 [2024-07-15 13:22:14.900377] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:18.225 [2024-07-15 13:22:14.900382] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:18.225 [2024-07-15 13:22:14.900386] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x193c970) 00:23:18.225 [2024-07-15 13:22:14.900395] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.225 [2024-07-15 13:22:14.900428] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19755f0, cid 3, qid 0 00:23:18.225 [2024-07-15 13:22:14.900510] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:18.225 [2024-07-15 13:22:14.900517] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:18.225 [2024-07-15 13:22:14.900521] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:18.225 [2024-07-15 13:22:14.900526] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19755f0) on tqpair=0x193c970 00:23:18.225 [2024-07-15 13:22:14.900535] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:18.225 [2024-07-15 13:22:14.900540] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
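The controller summary printed above (Vendor ID, Model Number, Firmware Version 24.05.1, Max Data Transfer Size, the 32-namespace limit, and so on) is rendered from the controller identify data gathered during the initialization sequence. After attach, the same fields are available through the public API, roughly as in the sketch below; the helper name is illustrative.

    /*
     * Sketch only (assumes a 'ctrlr' attached as in the previous snippet):
     * the summary block above is rendered from the controller identify
     * data, which the public API exposes directly.
     */
    #include "spdk/stdinc.h"
    #include "spdk/nvme.h"

    static void print_ctrlr_summary(struct spdk_nvme_ctrlr *ctrlr)
    {
            const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);
            uint32_t nsid;

            printf("Vendor ID: %04x\n", cdata->vid);
            printf("Model Number: %.*s\n", (int)sizeof(cdata->mn), (const char *)cdata->mn);
            printf("Firmware Version: %.*s\n", (int)sizeof(cdata->fr), (const char *)cdata->fr);
            printf("Max Data Transfer Size: %u\n", spdk_nvme_ctrlr_get_max_xfer_size(ctrlr));

            /* Walk the active namespace list gathered by the "identify
             * active ns" step in the DEBUG output. */
            for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr); nsid != 0;
                 nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
                    struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);

                    printf("Namespace %u: %ju bytes\n", nsid,
                           (uintmax_t)spdk_nvme_ns_get_size(ns));
            }
    }
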
00:23:18.225 [2024-07-15 13:22:14.900544] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x193c970) 00:23:18.225 [2024-07-15 13:22:14.900553] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.225 [2024-07-15 13:22:14.900576] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19755f0, cid 3, qid 0 00:23:18.225 [2024-07-15 13:22:14.900659] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:18.225 [2024-07-15 13:22:14.900666] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:18.225 [2024-07-15 13:22:14.900669] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:18.225 [2024-07-15 13:22:14.900674] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19755f0) on tqpair=0x193c970 00:23:18.225 [2024-07-15 13:22:14.900680] nvme_ctrlr.c:1084:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:23:18.225 [2024-07-15 13:22:14.900685] nvme_ctrlr.c:1087:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:23:18.225 [2024-07-15 13:22:14.900695] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:18.225 [2024-07-15 13:22:14.900700] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:18.225 [2024-07-15 13:22:14.900704] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x193c970) 00:23:18.225 [2024-07-15 13:22:14.900712] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.225 [2024-07-15 13:22:14.900730] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19755f0, cid 3, qid 0 00:23:18.225 [2024-07-15 13:22:14.900786] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:18.225 [2024-07-15 13:22:14.900792] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:18.225 [2024-07-15 13:22:14.900796] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:18.225 [2024-07-15 13:22:14.900800] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19755f0) on tqpair=0x193c970 00:23:18.225 [2024-07-15 13:22:14.900812] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:18.225 [2024-07-15 13:22:14.900817] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:18.225 [2024-07-15 13:22:14.900821] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x193c970) 00:23:18.225 [2024-07-15 13:22:14.900829] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.225 [2024-07-15 13:22:14.900847] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19755f0, cid 3, qid 0 00:23:18.225 [2024-07-15 13:22:14.900903] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:18.225 [2024-07-15 13:22:14.900909] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:18.225 [2024-07-15 13:22:14.900913] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:18.225 [2024-07-15 13:22:14.900917] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19755f0) on tqpair=0x193c970 00:23:18.225 [2024-07-15 13:22:14.900928] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: 
enter 00:23:18.225 [2024-07-15 13:22:14.900933] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:18.225 [2024-07-15 13:22:14.900937] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x193c970) 00:23:18.225 [2024-07-15 13:22:14.900944] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.225 [2024-07-15 13:22:14.900962] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19755f0, cid 3, qid 0 00:23:18.225 [2024-07-15 13:22:14.901017] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:18.225 [2024-07-15 13:22:14.901023] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:18.225 [2024-07-15 13:22:14.901027] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:18.225 [2024-07-15 13:22:14.901031] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19755f0) on tqpair=0x193c970 00:23:18.225 [2024-07-15 13:22:14.901043] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:18.225 [2024-07-15 13:22:14.901047] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:18.225 [2024-07-15 13:22:14.901051] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x193c970) 00:23:18.225 [2024-07-15 13:22:14.901059] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.225 [2024-07-15 13:22:14.901077] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19755f0, cid 3, qid 0 00:23:18.225 [2024-07-15 13:22:14.901130] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:18.225 [2024-07-15 13:22:14.901136] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:18.226 [2024-07-15 13:22:14.901140] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:18.226 [2024-07-15 13:22:14.901144] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19755f0) on tqpair=0x193c970 00:23:18.226 [2024-07-15 13:22:14.901156] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:18.226 [2024-07-15 13:22:14.901160] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:18.226 [2024-07-15 13:22:14.901165] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x193c970) 00:23:18.226 [2024-07-15 13:22:14.901172] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.226 [2024-07-15 13:22:14.901190] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19755f0, cid 3, qid 0 00:23:18.226 [2024-07-15 13:22:14.901260] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:18.226 [2024-07-15 13:22:14.901269] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:18.226 [2024-07-15 13:22:14.901273] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:18.226 [2024-07-15 13:22:14.901278] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19755f0) on tqpair=0x193c970 00:23:18.226 [2024-07-15 13:22:14.901290] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:18.226 [2024-07-15 13:22:14.901295] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:18.226 [2024-07-15 13:22:14.901299] nvme_tcp.c: 
959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x193c970) 00:23:18.226 [2024-07-15 13:22:14.901306] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.226 [2024-07-15 13:22:14.901327] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19755f0, cid 3, qid 0 00:23:18.226 [2024-07-15 13:22:14.901386] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:18.226 [2024-07-15 13:22:14.901393] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:18.226 [2024-07-15 13:22:14.901396] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:18.226 [2024-07-15 13:22:14.901401] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19755f0) on tqpair=0x193c970 00:23:18.226 [2024-07-15 13:22:14.901412] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:18.226 [2024-07-15 13:22:14.901417] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:18.226 [2024-07-15 13:22:14.901421] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x193c970) 00:23:18.226 [2024-07-15 13:22:14.901428] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.226 [2024-07-15 13:22:14.901447] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19755f0, cid 3, qid 0 00:23:18.226 [2024-07-15 13:22:14.901500] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:18.226 [2024-07-15 13:22:14.901506] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:18.226 [2024-07-15 13:22:14.901510] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:18.226 [2024-07-15 13:22:14.901514] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19755f0) on tqpair=0x193c970 00:23:18.226 [2024-07-15 13:22:14.901526] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:18.226 [2024-07-15 13:22:14.901530] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:18.226 [2024-07-15 13:22:14.901534] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x193c970) 00:23:18.226 [2024-07-15 13:22:14.901542] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.226 [2024-07-15 13:22:14.901560] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19755f0, cid 3, qid 0 00:23:18.226 [2024-07-15 13:22:14.901611] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:18.226 [2024-07-15 13:22:14.901630] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:18.226 [2024-07-15 13:22:14.901634] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:18.226 [2024-07-15 13:22:14.901638] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19755f0) on tqpair=0x193c970 00:23:18.226 [2024-07-15 13:22:14.901650] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:18.226 [2024-07-15 13:22:14.901655] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:18.226 [2024-07-15 13:22:14.901659] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x193c970) 00:23:18.226 [2024-07-15 13:22:14.901667] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.226 [2024-07-15 13:22:14.901685] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19755f0, cid 3, qid 0 00:23:18.226 [2024-07-15 13:22:14.901739] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:18.226 [2024-07-15 13:22:14.901746] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:18.226 [2024-07-15 13:22:14.901750] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:18.226 [2024-07-15 13:22:14.901755] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19755f0) on tqpair=0x193c970 00:23:18.226 [2024-07-15 13:22:14.901766] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:18.226 [2024-07-15 13:22:14.901771] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:18.226 [2024-07-15 13:22:14.901775] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x193c970) 00:23:18.226 [2024-07-15 13:22:14.901783] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.226 [2024-07-15 13:22:14.901801] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19755f0, cid 3, qid 0 00:23:18.226 [2024-07-15 13:22:14.901855] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:18.226 [2024-07-15 13:22:14.901875] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:18.226 [2024-07-15 13:22:14.901880] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:18.226 [2024-07-15 13:22:14.901884] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19755f0) on tqpair=0x193c970 00:23:18.226 [2024-07-15 13:22:14.901897] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:18.226 [2024-07-15 13:22:14.901902] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:18.226 [2024-07-15 13:22:14.901906] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x193c970) 00:23:18.226 [2024-07-15 13:22:14.901914] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.226 [2024-07-15 13:22:14.901934] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19755f0, cid 3, qid 0 00:23:18.226 [2024-07-15 13:22:14.901987] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:18.226 [2024-07-15 13:22:14.901999] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:18.226 [2024-07-15 13:22:14.902003] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:18.226 [2024-07-15 13:22:14.902008] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19755f0) on tqpair=0x193c970 00:23:18.226 [2024-07-15 13:22:14.902020] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:18.226 [2024-07-15 13:22:14.902025] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:18.226 [2024-07-15 13:22:14.902028] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x193c970) 00:23:18.226 [2024-07-15 13:22:14.902036] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.226 [2024-07-15 13:22:14.902055] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19755f0, cid 3, qid 
0 00:23:18.226 [2024-07-15 13:22:14.902110] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:18.226 [2024-07-15 13:22:14.902117] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:18.226 [2024-07-15 13:22:14.902120] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:18.226 [2024-07-15 13:22:14.902124] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19755f0) on tqpair=0x193c970 00:23:18.226 [2024-07-15 13:22:14.902136] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:18.226 [2024-07-15 13:22:14.902141] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:18.226 [2024-07-15 13:22:14.902145] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x193c970) 00:23:18.226 [2024-07-15 13:22:14.902152] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.226 [2024-07-15 13:22:14.902171] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19755f0, cid 3, qid 0 00:23:18.226 [2024-07-15 13:22:14.902236] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:18.226 [2024-07-15 13:22:14.902244] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:18.226 [2024-07-15 13:22:14.902248] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:18.226 [2024-07-15 13:22:14.902252] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19755f0) on tqpair=0x193c970 00:23:18.226 [2024-07-15 13:22:14.902264] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:18.226 [2024-07-15 13:22:14.902269] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:18.226 [2024-07-15 13:22:14.902273] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x193c970) 00:23:18.227 [2024-07-15 13:22:14.902281] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.227 [2024-07-15 13:22:14.902301] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19755f0, cid 3, qid 0 00:23:18.227 [2024-07-15 13:22:14.902355] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:18.227 [2024-07-15 13:22:14.902362] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:18.227 [2024-07-15 13:22:14.902366] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:18.227 [2024-07-15 13:22:14.902370] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19755f0) on tqpair=0x193c970 00:23:18.227 [2024-07-15 13:22:14.902381] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:18.227 [2024-07-15 13:22:14.902386] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:18.227 [2024-07-15 13:22:14.902390] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x193c970) 00:23:18.227 [2024-07-15 13:22:14.902397] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.227 [2024-07-15 13:22:14.902416] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19755f0, cid 3, qid 0 00:23:18.227 [2024-07-15 13:22:14.902472] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:18.227 [2024-07-15 13:22:14.902479] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:23:18.227 [2024-07-15 13:22:14.902483] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:18.227 [2024-07-15 13:22:14.902487] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19755f0) on tqpair=0x193c970 00:23:18.227 [2024-07-15 13:22:14.902498] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:18.227 [2024-07-15 13:22:14.902503] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:18.227 [2024-07-15 13:22:14.902507] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x193c970) 00:23:18.227 [2024-07-15 13:22:14.902514] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.227 [2024-07-15 13:22:14.902533] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19755f0, cid 3, qid 0 00:23:18.227 [2024-07-15 13:22:14.902586] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:18.227 [2024-07-15 13:22:14.902593] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:18.227 [2024-07-15 13:22:14.902597] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:18.227 [2024-07-15 13:22:14.902601] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19755f0) on tqpair=0x193c970 00:23:18.227 [2024-07-15 13:22:14.902612] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:18.227 [2024-07-15 13:22:14.902617] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:18.227 [2024-07-15 13:22:14.902621] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x193c970) 00:23:18.227 [2024-07-15 13:22:14.902628] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.227 [2024-07-15 13:22:14.902647] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19755f0, cid 3, qid 0 00:23:18.227 [2024-07-15 13:22:14.902717] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:18.227 [2024-07-15 13:22:14.902724] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:18.227 [2024-07-15 13:22:14.902728] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:18.227 [2024-07-15 13:22:14.902732] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19755f0) on tqpair=0x193c970 00:23:18.227 [2024-07-15 13:22:14.902744] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:18.227 [2024-07-15 13:22:14.902749] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:18.227 [2024-07-15 13:22:14.902753] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x193c970) 00:23:18.227 [2024-07-15 13:22:14.902760] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.227 [2024-07-15 13:22:14.902780] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19755f0, cid 3, qid 0 00:23:18.227 [2024-07-15 13:22:14.902835] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:18.227 [2024-07-15 13:22:14.902842] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:18.227 [2024-07-15 13:22:14.902846] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:18.227 [2024-07-15 13:22:14.902850] nvme_tcp.c: 
909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19755f0) on tqpair=0x193c970 00:23:18.227 [2024-07-15 13:22:14.902861] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:18.227 [2024-07-15 13:22:14.902866] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:18.227 [2024-07-15 13:22:14.902870] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x193c970) 00:23:18.227 [2024-07-15 13:22:14.902878] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.227 [2024-07-15 13:22:14.902896] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19755f0, cid 3, qid 0 00:23:18.227 [2024-07-15 13:22:14.902950] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:18.227 [2024-07-15 13:22:14.902956] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:18.227 [2024-07-15 13:22:14.902960] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:18.227 [2024-07-15 13:22:14.902965] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19755f0) on tqpair=0x193c970 00:23:18.227 [2024-07-15 13:22:14.902976] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:18.227 [2024-07-15 13:22:14.902981] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:18.227 [2024-07-15 13:22:14.902985] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x193c970) 00:23:18.227 [2024-07-15 13:22:14.902992] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.227 [2024-07-15 13:22:14.903011] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19755f0, cid 3, qid 0 00:23:18.227 [2024-07-15 13:22:14.903064] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:18.227 [2024-07-15 13:22:14.903071] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:18.227 [2024-07-15 13:22:14.903075] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:18.227 [2024-07-15 13:22:14.903079] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19755f0) on tqpair=0x193c970 00:23:18.227 [2024-07-15 13:22:14.903091] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:18.227 [2024-07-15 13:22:14.903095] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:18.227 [2024-07-15 13:22:14.903099] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x193c970) 00:23:18.227 [2024-07-15 13:22:14.903107] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.227 [2024-07-15 13:22:14.903125] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19755f0, cid 3, qid 0 00:23:18.227 [2024-07-15 13:22:14.903183] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:18.227 [2024-07-15 13:22:14.903189] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:18.227 [2024-07-15 13:22:14.903193] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:18.227 [2024-07-15 13:22:14.903197] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19755f0) on tqpair=0x193c970 00:23:18.227 [2024-07-15 13:22:14.903220] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 
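At this point the host has started tearing the controller down ("Prepare to destruct SSD", RTD3E = 0, shutdown timeout = 10000 ms above), and the repeated FABRIC PROPERTY GET commands on cid 3 are the admin-queue polls reading controller status until shutdown completes. From the application side this is what detaching a fabrics controller looks like; the asynchronous detach API makes that polling loop explicit. A rough sketch, with the helper name chosen here only for illustration:

    /*
     * Sketch only: poll an in-progress detach to completion. Each poll
     * services admin completions -- including property-get responses of
     * the kind logged above -- until shutdown finishes.
     */
    #include "spdk/stdinc.h"
    #include "spdk/nvme.h"

    static void detach_and_wait(struct spdk_nvme_ctrlr *ctrlr)
    {
            struct spdk_nvme_detach_ctx *detach_ctx = NULL;

            if (spdk_nvme_detach_async(ctrlr, &detach_ctx) != 0) {
                    fprintf(stderr, "failed to start detach\n");
                    return;
            }

            /* 0 means all detach operations completed; -EAGAIN means the
             * shutdown handshake is still in flight. */
            while (detach_ctx != NULL &&
                   spdk_nvme_detach_poll_async(detach_ctx) == -EAGAIN) {
                    /* keep polling */
            }
    }
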
00:23:18.227 [2024-07-15 13:22:14.903227] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:18.227 [2024-07-15 13:22:14.903231] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x193c970) 00:23:18.227 [2024-07-15 13:22:14.903239] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.227 [2024-07-15 13:22:14.903260] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19755f0, cid 3, qid 0 00:23:18.227 [2024-07-15 13:22:14.903321] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:18.227 [2024-07-15 13:22:14.903328] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:18.227 [2024-07-15 13:22:14.903332] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:18.227 [2024-07-15 13:22:14.903336] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19755f0) on tqpair=0x193c970 00:23:18.227 [2024-07-15 13:22:14.903347] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:18.227 [2024-07-15 13:22:14.903352] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:18.227 [2024-07-15 13:22:14.903356] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x193c970) 00:23:18.227 [2024-07-15 13:22:14.903363] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.227 [2024-07-15 13:22:14.903382] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19755f0, cid 3, qid 0 00:23:18.228 [2024-07-15 13:22:14.903434] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:18.228 [2024-07-15 13:22:14.903441] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:18.228 [2024-07-15 13:22:14.903445] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:18.228 [2024-07-15 13:22:14.903449] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19755f0) on tqpair=0x193c970 00:23:18.228 [2024-07-15 13:22:14.903460] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:18.228 [2024-07-15 13:22:14.903465] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:18.228 [2024-07-15 13:22:14.903469] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x193c970) 00:23:18.228 [2024-07-15 13:22:14.903476] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.228 [2024-07-15 13:22:14.903495] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19755f0, cid 3, qid 0 00:23:18.228 [2024-07-15 13:22:14.903550] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:18.228 [2024-07-15 13:22:14.903556] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:18.228 [2024-07-15 13:22:14.903560] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:18.228 [2024-07-15 13:22:14.903564] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19755f0) on tqpair=0x193c970 00:23:18.228 [2024-07-15 13:22:14.903576] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:18.228 [2024-07-15 13:22:14.903581] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:18.228 [2024-07-15 13:22:14.903584] nvme_tcp.c: 
959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x193c970) 00:23:18.228 [2024-07-15 13:22:14.903592] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.228 [2024-07-15 13:22:14.903610] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19755f0, cid 3, qid 0 00:23:18.228 [2024-07-15 13:22:14.903663] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:18.228 [2024-07-15 13:22:14.903669] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:18.228 [2024-07-15 13:22:14.903673] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:18.228 [2024-07-15 13:22:14.903677] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19755f0) on tqpair=0x193c970 00:23:18.228 [2024-07-15 13:22:14.903689] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:18.228 [2024-07-15 13:22:14.903693] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:18.228 [2024-07-15 13:22:14.903697] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x193c970) 00:23:18.228 [2024-07-15 13:22:14.903705] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.228 [2024-07-15 13:22:14.903723] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19755f0, cid 3, qid 0 00:23:18.228 [2024-07-15 13:22:14.903777] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:18.228 [2024-07-15 13:22:14.903783] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:18.228 [2024-07-15 13:22:14.903787] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:18.228 [2024-07-15 13:22:14.903791] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19755f0) on tqpair=0x193c970 00:23:18.228 [2024-07-15 13:22:14.903803] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:18.228 [2024-07-15 13:22:14.903807] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:18.228 [2024-07-15 13:22:14.903811] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x193c970) 00:23:18.228 [2024-07-15 13:22:14.903819] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.228 [2024-07-15 13:22:14.903837] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19755f0, cid 3, qid 0 00:23:18.228 [2024-07-15 13:22:14.903895] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:18.228 [2024-07-15 13:22:14.903902] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:18.228 [2024-07-15 13:22:14.903906] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:18.228 [2024-07-15 13:22:14.903910] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19755f0) on tqpair=0x193c970 00:23:18.228 [2024-07-15 13:22:14.903922] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:18.228 [2024-07-15 13:22:14.903926] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:18.228 [2024-07-15 13:22:14.903930] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x193c970) 00:23:18.228 [2024-07-15 13:22:14.903938] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.228 [2024-07-15 13:22:14.903956] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19755f0, cid 3, qid 0 00:23:18.228 [2024-07-15 13:22:14.904008] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:18.228 [2024-07-15 13:22:14.904014] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:18.228 [2024-07-15 13:22:14.904018] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:18.228 [2024-07-15 13:22:14.904023] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19755f0) on tqpair=0x193c970 00:23:18.228 [2024-07-15 13:22:14.904034] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:18.228 [2024-07-15 13:22:14.904038] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:18.228 [2024-07-15 13:22:14.904043] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x193c970) 00:23:18.228 [2024-07-15 13:22:14.904050] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.228 [2024-07-15 13:22:14.904068] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19755f0, cid 3, qid 0 00:23:18.228 [2024-07-15 13:22:14.904134] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:18.228 [2024-07-15 13:22:14.904141] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:18.228 [2024-07-15 13:22:14.904145] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:18.228 [2024-07-15 13:22:14.904149] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19755f0) on tqpair=0x193c970 00:23:18.228 [2024-07-15 13:22:14.904160] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:18.228 [2024-07-15 13:22:14.904165] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:18.228 [2024-07-15 13:22:14.904169] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x193c970) 00:23:18.228 [2024-07-15 13:22:14.904176] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.228 [2024-07-15 13:22:14.904195] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19755f0, cid 3, qid 0 00:23:18.228 [2024-07-15 13:22:14.908238] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:18.228 [2024-07-15 13:22:14.908255] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:18.228 [2024-07-15 13:22:14.908260] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:18.228 [2024-07-15 13:22:14.908265] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19755f0) on tqpair=0x193c970 00:23:18.228 [2024-07-15 13:22:14.908281] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:18.228 [2024-07-15 13:22:14.908286] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:18.229 [2024-07-15 13:22:14.908290] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x193c970) 00:23:18.229 [2024-07-15 13:22:14.908300] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.229 [2024-07-15 13:22:14.908327] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19755f0, cid 3, qid 
0 00:23:18.229 [2024-07-15 13:22:14.908406] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:18.229 [2024-07-15 13:22:14.908413] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:18.229 [2024-07-15 13:22:14.908417] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:18.229 [2024-07-15 13:22:14.908421] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19755f0) on tqpair=0x193c970 00:23:18.229 [2024-07-15 13:22:14.908431] nvme_ctrlr.c:1206:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:23:18.229 0 Kelvin (-273 Celsius) 00:23:18.229 Available Spare: 0% 00:23:18.229 Available Spare Threshold: 0% 00:23:18.229 Life Percentage Used: 0% 00:23:18.229 Data Units Read: 0 00:23:18.229 Data Units Written: 0 00:23:18.229 Host Read Commands: 0 00:23:18.229 Host Write Commands: 0 00:23:18.229 Controller Busy Time: 0 minutes 00:23:18.229 Power Cycles: 0 00:23:18.229 Power On Hours: 0 hours 00:23:18.229 Unsafe Shutdowns: 0 00:23:18.229 Unrecoverable Media Errors: 0 00:23:18.229 Lifetime Error Log Entries: 0 00:23:18.229 Warning Temperature Time: 0 minutes 00:23:18.229 Critical Temperature Time: 0 minutes 00:23:18.229 00:23:18.229 Number of Queues 00:23:18.229 ================ 00:23:18.229 Number of I/O Submission Queues: 127 00:23:18.229 Number of I/O Completion Queues: 127 00:23:18.229 00:23:18.229 Active Namespaces 00:23:18.229 ================= 00:23:18.229 Namespace ID:1 00:23:18.229 Error Recovery Timeout: Unlimited 00:23:18.229 Command Set Identifier: NVM (00h) 00:23:18.229 Deallocate: Supported 00:23:18.229 Deallocated/Unwritten Error: Not Supported 00:23:18.229 Deallocated Read Value: Unknown 00:23:18.229 Deallocate in Write Zeroes: Not Supported 00:23:18.229 Deallocated Guard Field: 0xFFFF 00:23:18.229 Flush: Supported 00:23:18.229 Reservation: Supported 00:23:18.229 Namespace Sharing Capabilities: Multiple Controllers 00:23:18.229 Size (in LBAs): 131072 (0GiB) 00:23:18.229 Capacity (in LBAs): 131072 (0GiB) 00:23:18.229 Utilization (in LBAs): 131072 (0GiB) 00:23:18.229 NGUID: ABCDEF0123456789ABCDEF0123456789 00:23:18.229 EUI64: ABCDEF0123456789 00:23:18.229 UUID: 0a3668a2-9e5a-4ed1-9abf-b28d5b90cbd2 00:23:18.229 Thin Provisioning: Not Supported 00:23:18.229 Per-NS Atomic Units: Yes 00:23:18.229 Atomic Boundary Size (Normal): 0 00:23:18.229 Atomic Boundary Size (PFail): 0 00:23:18.229 Atomic Boundary Offset: 0 00:23:18.229 Maximum Single Source Range Length: 65535 00:23:18.229 Maximum Copy Length: 65535 00:23:18.229 Maximum Source Range Count: 1 00:23:18.229 NGUID/EUI64 Never Reused: No 00:23:18.229 Namespace Write Protected: No 00:23:18.229 Number of LBA Formats: 1 00:23:18.229 Current LBA Format: LBA Format #00 00:23:18.229 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:18.229 00:23:18.229 13:22:14 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:23:18.488 13:22:14 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:18.488 13:22:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.488 13:22:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:18.488 13:22:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.488 13:22:14 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:23:18.488 13:22:14 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 
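The teardown traced here (the nvmf_delete_subsystem RPC just above, followed by the nvmftestfini call) condenses, over the next stretch of log, to a handful of shell commands. The lines below are a sketch assembled from the xtrace output of this run; the NQN, module names, pid and interface name are the ones this log prints, and this is not the verbatim contents of identify.sh or nvmf/common.sh:
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # drop the test subsystem
modprobe -v -r nvme-tcp         # the rmmod lines below show this also pulls out nvme_fabrics and nvme_keyring
modprobe -v -r nvme-fabrics
kill 104420                     # 104420 is the nvmf_tgt pid this run reports; the killprocess helper then waits for it to exit
ip -4 addr flush nvmf_init_if   # clear the initiator-side veth address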
00:23:18.488 13:22:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:18.488 13:22:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:23:18.488 13:22:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:18.488 13:22:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:23:18.488 13:22:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:18.488 13:22:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:18.488 rmmod nvme_tcp 00:23:18.488 rmmod nvme_fabrics 00:23:18.488 rmmod nvme_keyring 00:23:18.488 13:22:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:18.488 13:22:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:23:18.488 13:22:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:23:18.488 13:22:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 104420 ']' 00:23:18.488 13:22:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 104420 00:23:18.488 13:22:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@946 -- # '[' -z 104420 ']' 00:23:18.488 13:22:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@950 -- # kill -0 104420 00:23:18.488 13:22:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # uname 00:23:18.488 13:22:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:18.488 13:22:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 104420 00:23:18.488 13:22:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:18.488 killing process with pid 104420 00:23:18.488 13:22:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:18.488 13:22:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@964 -- # echo 'killing process with pid 104420' 00:23:18.488 13:22:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@965 -- # kill 104420 00:23:18.488 13:22:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@970 -- # wait 104420 00:23:18.746 13:22:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:18.746 13:22:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:18.746 13:22:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:18.746 13:22:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:18.746 13:22:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:18.746 13:22:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:18.746 13:22:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:18.746 13:22:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:18.746 13:22:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:18.746 00:23:18.746 real 0m2.687s 00:23:18.746 user 0m7.551s 00:23:18.746 sys 0m0.693s 00:23:18.746 13:22:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:18.746 ************************************ 00:23:18.746 END TEST nvmf_identify 00:23:18.746 ************************************ 00:23:18.746 13:22:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:18.746 13:22:15 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:18.746 13:22:15 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:18.746 13:22:15 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:18.746 13:22:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:18.746 ************************************ 00:23:18.746 START TEST nvmf_perf 00:23:18.746 ************************************ 00:23:18.746 13:22:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:18.746 * Looking for test storage... 00:23:19.005 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:19.005 13:22:15 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:19.005 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:23:19.005 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:19.005 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:19.005 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:19.005 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:19.005 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:19.005 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:19.005 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:19.005 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:19.005 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:19.005 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:19.005 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:23:19.005 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=c8b8b44b-387e-43b9-a950-dc0d98528a02 00:23:19.005 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:19.005 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:19.005 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:19.005 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:19.005 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:19.005 13:22:15 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:19.005 13:22:15 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:19.005 13:22:15 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:19.005 13:22:15 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.005 13:22:15 nvmf_tcp.nvmf_perf 
-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.005 13:22:15 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.005 13:22:15 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:23:19.005 13:22:15 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.005 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:23:19.005 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:19.005 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:19.005 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:19.005 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:19.005 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:19.005 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:19.005 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:19.005 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:19.005 13:22:15 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:19.005 13:22:15 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:19.005 13:22:15 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:19.005 13:22:15 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:23:19.005 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:19.005 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:19.005 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:19.005 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:19.005 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:19.005 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:23:19.005 13:22:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:19.005 13:22:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:19.005 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:23:19.005 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:23:19.005 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:23:19.005 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:23:19.005 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:23:19.005 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@432 -- # nvmf_veth_init 00:23:19.005 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:19.005 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:19.005 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:19.005 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:19.005 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:19.006 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:19.006 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:19.006 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:19.006 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:19.006 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:19.006 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:19.006 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:19.006 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:19.006 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:19.006 Cannot find device "nvmf_tgt_br" 00:23:19.006 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # true 00:23:19.006 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:19.006 Cannot find device "nvmf_tgt_br2" 00:23:19.006 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # true 00:23:19.006 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:19.006 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:19.006 Cannot find device "nvmf_tgt_br" 00:23:19.006 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # true 00:23:19.006 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:19.006 Cannot find device "nvmf_tgt_br2" 00:23:19.006 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # true 00:23:19.006 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:19.006 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:19.006 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:19.006 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:19.006 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # true 00:23:19.006 13:22:15 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:19.006 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:19.006 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # true 00:23:19.006 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:19.006 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:19.006 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:19.006 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:19.006 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:19.006 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:19.264 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:19.264 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:19.264 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:19.264 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:19.264 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:19.264 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:19.264 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:19.264 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:19.264 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:19.264 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:19.264 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:19.264 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:19.264 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:19.264 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:19.264 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:19.264 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:19.264 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:19.264 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:19.264 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:19.264 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.105 ms 00:23:19.264 00:23:19.264 --- 10.0.0.2 ping statistics --- 00:23:19.264 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:19.264 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:23:19.264 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:19.264 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:23:19.264 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:23:19.264 00:23:19.264 --- 10.0.0.3 ping statistics --- 00:23:19.264 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:19.264 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:23:19.264 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:19.264 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:19.264 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:23:19.264 00:23:19.264 --- 10.0.0.1 ping statistics --- 00:23:19.264 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:19.264 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:23:19.264 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:19.264 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@433 -- # return 0 00:23:19.264 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:19.264 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:19.264 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:19.264 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:19.264 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:19.264 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:19.264 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:19.264 13:22:15 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:23:19.264 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:19.264 13:22:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:19.265 13:22:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:19.265 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=104646 00:23:19.265 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:19.265 13:22:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 104646 00:23:19.265 13:22:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@827 -- # '[' -z 104646 ']' 00:23:19.265 13:22:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:19.265 13:22:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:19.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:19.265 13:22:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:19.265 13:22:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:19.265 13:22:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:19.265 [2024-07-15 13:22:15.943680] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
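The three pings above are the framework confirming that the virtual topology it just built passes traffic before the target is exercised. Reassembled from the ip/iptables/modprobe/nvmf_tgt invocations visible in this log, that setup amounts to the sketch below; the names, addresses and flags are the ones printed in this run, and this is a condensed approximation of nvmf_veth_init/nvmfappstart rather than their literal code:
ip netns add nvmf_tgt_ns_spdk                                    # target runs in its own network namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator-side veth pair
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br         # target-side veth pairs
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                         # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target address
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge                                  # bridge tying the veth pairs together
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic reach the listener port
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
modprobe nvme-tcp                                                # common.sh loads the kernel nvme-tcp module up front
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # backgrounded; its pid is the nvmfpid=104646 shown above
Running nvmf_tgt inside the namespace while the perf tools stay on the host side of the bridge is what lets one machine act as both NVMe-oF target and initiator over a purely virtual link.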
00:23:19.265 [2024-07-15 13:22:15.943783] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:19.522 [2024-07-15 13:22:16.081326] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:19.522 [2024-07-15 13:22:16.187194] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:19.522 [2024-07-15 13:22:16.187289] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:19.522 [2024-07-15 13:22:16.187304] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:19.522 [2024-07-15 13:22:16.187315] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:19.522 [2024-07-15 13:22:16.187324] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:19.522 [2024-07-15 13:22:16.187742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:19.522 [2024-07-15 13:22:16.187906] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:19.523 [2024-07-15 13:22:16.188185] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:19.523 [2024-07-15 13:22:16.188193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:20.455 13:22:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:20.455 13:22:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@860 -- # return 0 00:23:20.455 13:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:20.455 13:22:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:20.455 13:22:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:20.455 13:22:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:20.455 13:22:16 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:23:20.455 13:22:16 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:23:20.713 13:22:17 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:23:20.713 13:22:17 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:23:20.972 13:22:17 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:23:20.972 13:22:17 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:21.538 13:22:18 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:23:21.538 13:22:18 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:23:21.538 13:22:18 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:23:21.538 13:22:18 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:23:21.538 13:22:18 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:21.797 [2024-07-15 13:22:18.391244] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:21.797 13:22:18 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:23:22.055 13:22:18 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:22.055 13:22:18 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:22.312 13:22:18 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:22.312 13:22:18 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:23:22.570 13:22:19 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:22.852 [2024-07-15 13:22:19.352516] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:22.852 13:22:19 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:23.112 13:22:19 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:23:23.112 13:22:19 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:23:23.112 13:22:19 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:23:23.112 13:22:19 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:23:24.043 Initializing NVMe Controllers 00:23:24.043 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:23:24.043 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:23:24.043 Initialization complete. Launching workers. 00:23:24.043 ======================================================== 00:23:24.043 Latency(us) 00:23:24.043 Device Information : IOPS MiB/s Average min max 00:23:24.043 PCIE (0000:00:10.0) NSID 1 from core 0: 24095.06 94.12 1328.06 335.61 5056.94 00:23:24.043 ======================================================== 00:23:24.043 Total : 24095.06 94.12 1328.06 335.61 5056.94 00:23:24.043 00:23:24.043 13:22:20 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:25.418 Initializing NVMe Controllers 00:23:25.418 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:25.418 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:25.418 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:25.418 Initialization complete. Launching workers. 
00:23:25.418 ======================================================== 00:23:25.418 Latency(us) 00:23:25.418 Device Information : IOPS MiB/s Average min max 00:23:25.418 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3490.97 13.64 286.15 117.21 6169.77 00:23:25.418 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 123.50 0.48 8160.50 6048.31 12028.74 00:23:25.418 ======================================================== 00:23:25.418 Total : 3614.48 14.12 555.21 117.21 12028.74 00:23:25.418 00:23:25.418 13:22:22 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:26.790 Initializing NVMe Controllers 00:23:26.790 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:26.790 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:26.790 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:26.790 Initialization complete. Launching workers. 00:23:26.790 ======================================================== 00:23:26.790 Latency(us) 00:23:26.790 Device Information : IOPS MiB/s Average min max 00:23:26.790 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8627.63 33.70 3708.91 738.86 7607.12 00:23:26.790 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2666.34 10.42 12111.07 7570.83 23684.98 00:23:26.790 ======================================================== 00:23:26.790 Total : 11293.98 44.12 5692.54 738.86 23684.98 00:23:26.790 00:23:26.790 13:22:23 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:23:26.790 13:22:23 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:29.312 Initializing NVMe Controllers 00:23:29.312 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:29.312 Controller IO queue size 128, less than required. 00:23:29.312 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:29.312 Controller IO queue size 128, less than required. 00:23:29.312 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:29.312 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:29.312 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:29.312 Initialization complete. Launching workers. 
00:23:29.312 ======================================================== 00:23:29.312 Latency(us) 00:23:29.312 Device Information : IOPS MiB/s Average min max 00:23:29.312 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1403.72 350.93 93330.43 54500.89 179190.89 00:23:29.312 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 554.10 138.52 236818.15 82421.47 363065.78 00:23:29.312 ======================================================== 00:23:29.312 Total : 1957.81 489.45 133940.17 54500.89 363065.78 00:23:29.312 00:23:29.312 13:22:25 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:23:29.570 Initializing NVMe Controllers 00:23:29.570 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:29.570 Controller IO queue size 128, less than required. 00:23:29.570 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:29.570 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:23:29.570 Controller IO queue size 128, less than required. 00:23:29.570 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:29.570 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:23:29.570 WARNING: Some requested NVMe devices were skipped 00:23:29.570 No valid NVMe controllers or AIO or URING devices found 00:23:29.570 13:22:26 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:23:32.099 Initializing NVMe Controllers 00:23:32.099 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:32.099 Controller IO queue size 128, less than required. 00:23:32.099 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:32.099 Controller IO queue size 128, less than required. 00:23:32.099 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:32.099 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:32.099 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:32.099 Initialization complete. Launching workers. 
00:23:32.099 00:23:32.099 ==================== 00:23:32.099 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:23:32.099 TCP transport: 00:23:32.099 polls: 8103 00:23:32.099 idle_polls: 4400 00:23:32.099 sock_completions: 3703 00:23:32.099 nvme_completions: 4165 00:23:32.099 submitted_requests: 6266 00:23:32.099 queued_requests: 1 00:23:32.099 00:23:32.099 ==================== 00:23:32.099 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:23:32.099 TCP transport: 00:23:32.099 polls: 10594 00:23:32.099 idle_polls: 7231 00:23:32.099 sock_completions: 3363 00:23:32.099 nvme_completions: 6537 00:23:32.099 submitted_requests: 9792 00:23:32.099 queued_requests: 1 00:23:32.099 ======================================================== 00:23:32.099 Latency(us) 00:23:32.099 Device Information : IOPS MiB/s Average min max 00:23:32.099 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1040.89 260.22 127612.68 69953.42 201173.51 00:23:32.099 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1633.83 408.46 79522.00 42533.15 141594.12 00:23:32.099 ======================================================== 00:23:32.099 Total : 2674.72 668.68 98236.92 42533.15 201173.51 00:23:32.099 00:23:32.099 13:22:28 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:23:32.099 13:22:28 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:32.358 13:22:29 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:23:32.358 13:22:29 nvmf_tcp.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:00:10.0 ']' 00:23:32.358 13:22:29 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:23:32.926 13:22:29 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # ls_guid=d24a3ca4-43f9-4459-9fbd-d93bc02b7adf 00:23:32.926 13:22:29 nvmf_tcp.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb d24a3ca4-43f9-4459-9fbd-d93bc02b7adf 00:23:32.926 13:22:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1360 -- # local lvs_uuid=d24a3ca4-43f9-4459-9fbd-d93bc02b7adf 00:23:32.926 13:22:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1361 -- # local lvs_info 00:23:32.926 13:22:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1362 -- # local fc 00:23:32.926 13:22:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1363 -- # local cs 00:23:32.926 13:22:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:32.926 13:22:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:23:32.926 { 00:23:32.926 "base_bdev": "Nvme0n1", 00:23:32.926 "block_size": 4096, 00:23:32.926 "cluster_size": 4194304, 00:23:32.926 "free_clusters": 1278, 00:23:32.926 "name": "lvs_0", 00:23:32.926 "total_data_clusters": 1278, 00:23:32.926 "uuid": "d24a3ca4-43f9-4459-9fbd-d93bc02b7adf" 00:23:32.926 } 00:23:32.926 ]' 00:23:32.926 13:22:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="d24a3ca4-43f9-4459-9fbd-d93bc02b7adf") .free_clusters' 00:23:33.184 13:22:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # fc=1278 00:23:33.185 13:22:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="d24a3ca4-43f9-4459-9fbd-d93bc02b7adf") .cluster_size' 00:23:33.185 13:22:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # cs=4194304 
00:23:33.185 5112 00:23:33.185 13:22:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # free_mb=5112 00:23:33.185 13:22:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # echo 5112 00:23:33.185 13:22:29 nvmf_tcp.nvmf_perf -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:23:33.185 13:22:29 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u d24a3ca4-43f9-4459-9fbd-d93bc02b7adf lbd_0 5112 00:23:33.443 13:22:30 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # lb_guid=c81e3110-53d5-4a7c-8b7b-fd580de38889 00:23:33.443 13:22:30 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore c81e3110-53d5-4a7c-8b7b-fd580de38889 lvs_n_0 00:23:33.701 13:22:30 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=ccc02dab-a795-4eef-bf9e-34dadf865e6e 00:23:33.701 13:22:30 nvmf_tcp.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb ccc02dab-a795-4eef-bf9e-34dadf865e6e 00:23:33.701 13:22:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1360 -- # local lvs_uuid=ccc02dab-a795-4eef-bf9e-34dadf865e6e 00:23:33.701 13:22:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1361 -- # local lvs_info 00:23:33.701 13:22:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1362 -- # local fc 00:23:33.701 13:22:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1363 -- # local cs 00:23:33.701 13:22:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:33.959 13:22:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:23:33.959 { 00:23:33.959 "base_bdev": "Nvme0n1", 00:23:33.959 "block_size": 4096, 00:23:33.959 "cluster_size": 4194304, 00:23:33.959 "free_clusters": 0, 00:23:33.959 "name": "lvs_0", 00:23:33.959 "total_data_clusters": 1278, 00:23:33.959 "uuid": "d24a3ca4-43f9-4459-9fbd-d93bc02b7adf" 00:23:33.959 }, 00:23:33.959 { 00:23:33.959 "base_bdev": "c81e3110-53d5-4a7c-8b7b-fd580de38889", 00:23:33.959 "block_size": 4096, 00:23:33.959 "cluster_size": 4194304, 00:23:33.959 "free_clusters": 1276, 00:23:33.959 "name": "lvs_n_0", 00:23:33.959 "total_data_clusters": 1276, 00:23:33.959 "uuid": "ccc02dab-a795-4eef-bf9e-34dadf865e6e" 00:23:33.959 } 00:23:33.959 ]' 00:23:33.959 13:22:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="ccc02dab-a795-4eef-bf9e-34dadf865e6e") .free_clusters' 00:23:33.959 13:22:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # fc=1276 00:23:33.959 13:22:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="ccc02dab-a795-4eef-bf9e-34dadf865e6e") .cluster_size' 00:23:34.216 13:22:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # cs=4194304 00:23:34.216 13:22:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # free_mb=5104 00:23:34.217 5104 00:23:34.217 13:22:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # echo 5104 00:23:34.217 13:22:30 nvmf_tcp.nvmf_perf -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:23:34.217 13:22:30 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ccc02dab-a795-4eef-bf9e-34dadf865e6e lbd_nest_0 5104 00:23:34.217 13:22:30 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=a4b8b622-25f7-47bf-b8fc-3881e0927806 00:23:34.217 13:22:30 nvmf_tcp.nvmf_perf -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:23:34.473 13:22:31 nvmf_tcp.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:23:34.473 13:22:31 nvmf_tcp.nvmf_perf -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 a4b8b622-25f7-47bf-b8fc-3881e0927806 00:23:34.731 13:22:31 nvmf_tcp.nvmf_perf -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:34.989 13:22:31 nvmf_tcp.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:23:34.989 13:22:31 nvmf_tcp.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:23:34.989 13:22:31 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:23:34.989 13:22:31 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:23:34.989 13:22:31 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:35.555 Initializing NVMe Controllers 00:23:35.555 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:35.555 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:23:35.555 WARNING: Some requested NVMe devices were skipped 00:23:35.555 No valid NVMe controllers or AIO or URING devices found 00:23:35.555 13:22:32 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:23:35.555 13:22:32 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:45.568 Initializing NVMe Controllers 00:23:45.568 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:45.568 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:45.568 Initialization complete. Launching workers. 
00:23:45.568 ======================================================== 00:23:45.568 Latency(us) 00:23:45.568 Device Information : IOPS MiB/s Average min max 00:23:45.568 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 987.90 123.49 1011.45 361.33 6647.99 00:23:45.568 ======================================================== 00:23:45.568 Total : 987.90 123.49 1011.45 361.33 6647.99 00:23:45.568 00:23:45.568 13:22:42 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:23:45.568 13:22:42 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:23:45.568 13:22:42 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:45.826 Initializing NVMe Controllers 00:23:45.826 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:45.826 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:23:45.826 WARNING: Some requested NVMe devices were skipped 00:23:45.826 No valid NVMe controllers or AIO or URING devices found 00:23:46.083 13:22:42 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:23:46.083 13:22:42 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:58.278 Initializing NVMe Controllers 00:23:58.278 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:58.278 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:58.278 Initialization complete. Launching workers. 
00:23:58.278 ======================================================== 00:23:58.278 Latency(us) 00:23:58.278 Device Information : IOPS MiB/s Average min max 00:23:58.278 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 916.12 114.51 34964.99 7868.75 286624.36 00:23:58.278 ======================================================== 00:23:58.278 Total : 916.12 114.51 34964.99 7868.75 286624.36 00:23:58.278 00:23:58.278 13:22:52 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:23:58.278 13:22:52 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:23:58.278 13:22:52 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:58.278 Initializing NVMe Controllers 00:23:58.278 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:58.278 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:23:58.278 WARNING: Some requested NVMe devices were skipped 00:23:58.278 No valid NVMe controllers or AIO or URING devices found 00:23:58.278 13:22:53 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:23:58.278 13:22:53 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:08.243 Initializing NVMe Controllers 00:24:08.243 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:08.243 Controller IO queue size 128, less than required. 00:24:08.243 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:08.243 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:08.243 Initialization complete. Launching workers. 
00:24:08.243 ======================================================== 00:24:08.243 Latency(us) 00:24:08.243 Device Information : IOPS MiB/s Average min max 00:24:08.243 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3581.97 447.75 35789.25 11073.24 88998.04 00:24:08.243 ======================================================== 00:24:08.243 Total : 3581.97 447.75 35789.25 11073.24 88998.04 00:24:08.243 00:24:08.243 13:23:03 nvmf_tcp.nvmf_perf -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:08.243 13:23:03 nvmf_tcp.nvmf_perf -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete a4b8b622-25f7-47bf-b8fc-3881e0927806 00:24:08.243 13:23:04 nvmf_tcp.nvmf_perf -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:24:08.243 13:23:04 nvmf_tcp.nvmf_perf -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete c81e3110-53d5-4a7c-8b7b-fd580de38889 00:24:08.243 13:23:04 nvmf_tcp.nvmf_perf -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:24:08.502 13:23:05 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:24:08.502 13:23:05 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:24:08.502 13:23:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:08.502 13:23:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:24:08.502 13:23:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:08.502 13:23:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:24:08.502 13:23:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:08.502 13:23:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:08.502 rmmod nvme_tcp 00:24:08.502 rmmod nvme_fabrics 00:24:08.502 rmmod nvme_keyring 00:24:08.502 13:23:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:08.502 13:23:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:24:08.502 13:23:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:24:08.502 13:23:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 104646 ']' 00:24:08.502 13:23:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 104646 00:24:08.502 13:23:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@946 -- # '[' -z 104646 ']' 00:24:08.502 13:23:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@950 -- # kill -0 104646 00:24:08.502 13:23:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # uname 00:24:08.502 13:23:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:08.502 13:23:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 104646 00:24:08.502 killing process with pid 104646 00:24:08.502 13:23:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:08.502 13:23:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:08.502 13:23:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 104646' 00:24:08.502 13:23:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@965 -- # kill 104646 00:24:08.502 13:23:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@970 -- # wait 104646 00:24:09.874 13:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:09.874 13:23:06 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:09.874 13:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:09.874 13:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:09.874 13:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:09.874 13:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:09.874 13:23:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:09.874 13:23:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:10.131 13:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:10.132 00:24:10.132 real 0m51.205s 00:24:10.132 user 3m13.540s 00:24:10.132 sys 0m11.132s 00:24:10.132 13:23:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:10.132 13:23:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:10.132 ************************************ 00:24:10.132 END TEST nvmf_perf 00:24:10.132 ************************************ 00:24:10.132 13:23:06 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:10.132 13:23:06 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:24:10.132 13:23:06 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:10.132 13:23:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:10.132 ************************************ 00:24:10.132 START TEST nvmf_fio_host 00:24:10.132 ************************************ 00:24:10.132 13:23:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:10.132 * Looking for test storage... 
00:24:10.132 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:10.132 13:23:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:10.132 13:23:06 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:10.132 13:23:06 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:10.132 13:23:06 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:10.132 13:23:06 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.132 13:23:06 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.132 13:23:06 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.132 13:23:06 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:10.132 13:23:06 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.132 13:23:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:10.132 13:23:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:24:10.132 13:23:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:10.132 13:23:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:10.132 13:23:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:10.132 13:23:06 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:10.132 13:23:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:10.132 13:23:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:10.132 13:23:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:10.132 13:23:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:10.132 13:23:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:10.132 13:23:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:10.132 13:23:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:24:10.132 13:23:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=c8b8b44b-387e-43b9-a950-dc0d98528a02 00:24:10.132 13:23:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:10.132 13:23:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:10.132 13:23:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:10.132 13:23:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:10.132 13:23:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:10.132 13:23:06 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:10.132 13:23:06 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:10.132 13:23:06 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:10.132 13:23:06 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.132 13:23:06 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.132 13:23:06 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.132 13:23:06 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:10.132 13:23:06 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.132 13:23:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:24:10.132 13:23:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:10.132 13:23:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:10.132 13:23:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:10.132 13:23:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:10.132 13:23:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:10.132 13:23:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:10.132 13:23:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:10.132 13:23:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:10.132 13:23:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:10.132 13:23:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:24:10.132 13:23:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:10.132 13:23:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:10.132 13:23:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:10.132 13:23:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:10.132 13:23:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:10.132 13:23:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:10.132 13:23:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:10.132 13:23:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:10.132 13:23:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:24:10.132 13:23:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:24:10.132 13:23:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:24:10.132 13:23:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 
00:24:10.132 13:23:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:24:10.132 13:23:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:24:10.132 13:23:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:10.132 13:23:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:10.132 13:23:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:10.132 13:23:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:24:10.132 13:23:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:10.132 13:23:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:10.132 13:23:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:10.132 13:23:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:10.132 13:23:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:10.132 13:23:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:10.132 13:23:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:10.132 13:23:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:10.132 13:23:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:24:10.132 13:23:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:24:10.132 Cannot find device "nvmf_tgt_br" 00:24:10.132 13:23:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # true 00:24:10.132 13:23:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:24:10.132 Cannot find device "nvmf_tgt_br2" 00:24:10.132 13:23:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # true 00:24:10.133 13:23:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:24:10.133 13:23:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:24:10.133 Cannot find device "nvmf_tgt_br" 00:24:10.133 13:23:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # true 00:24:10.133 13:23:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:24:10.133 Cannot find device "nvmf_tgt_br2" 00:24:10.133 13:23:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # true 00:24:10.133 13:23:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:24:10.390 13:23:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:24:10.390 13:23:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:10.390 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:10.390 13:23:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:24:10.390 13:23:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:10.391 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:10.391 13:23:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:24:10.391 13:23:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:24:10.391 13:23:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link 
add nvmf_init_if type veth peer name nvmf_init_br 00:24:10.391 13:23:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:10.391 13:23:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:10.391 13:23:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:10.391 13:23:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:10.391 13:23:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:10.391 13:23:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:10.391 13:23:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:10.391 13:23:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:24:10.391 13:23:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:24:10.391 13:23:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:24:10.391 13:23:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:24:10.391 13:23:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:10.391 13:23:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:10.391 13:23:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:10.391 13:23:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:24:10.391 13:23:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:24:10.391 13:23:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:24:10.391 13:23:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:10.391 13:23:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:10.391 13:23:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:10.391 13:23:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:10.391 13:23:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:24:10.391 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:10.391 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.114 ms 00:24:10.391 00:24:10.391 --- 10.0.0.2 ping statistics --- 00:24:10.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:10.391 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:24:10.391 13:23:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:24:10.391 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:10.391 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:24:10.391 00:24:10.391 --- 10.0.0.3 ping statistics --- 00:24:10.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:10.391 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:24:10.391 13:23:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:10.391 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:10.391 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:24:10.391 00:24:10.391 --- 10.0.0.1 ping statistics --- 00:24:10.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:10.391 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:24:10.648 13:23:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:10.648 13:23:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@433 -- # return 0 00:24:10.648 13:23:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:10.648 13:23:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:10.648 13:23:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:10.648 13:23:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:10.648 13:23:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:10.648 13:23:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:10.648 13:23:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:10.648 13:23:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:24:10.648 13:23:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:24:10.648 13:23:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:10.648 13:23:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.648 13:23:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=105593 00:24:10.648 13:23:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:10.648 13:23:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:10.648 13:23:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 105593 00:24:10.648 13:23:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@827 -- # '[' -z 105593 ']' 00:24:10.648 13:23:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:10.648 13:23:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:10.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:10.648 13:23:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:10.648 13:23:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:10.648 13:23:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.648 [2024-07-15 13:23:07.217861] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:24:10.648 [2024-07-15 13:23:07.217965] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:10.648 [2024-07-15 13:23:07.357241] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:10.905 [2024-07-15 13:23:07.461690] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:10.905 [2024-07-15 13:23:07.461751] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:10.905 [2024-07-15 13:23:07.461766] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:10.905 [2024-07-15 13:23:07.461776] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:10.905 [2024-07-15 13:23:07.461786] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:10.905 [2024-07-15 13:23:07.461931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:10.905 [2024-07-15 13:23:07.462218] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:10.905 [2024-07-15 13:23:07.463254] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:10.905 [2024-07-15 13:23:07.463272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:11.836 13:23:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:11.836 13:23:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@860 -- # return 0 00:24:11.836 13:23:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:11.836 [2024-07-15 13:23:08.562322] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:12.094 13:23:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:24:12.094 13:23:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:12.094 13:23:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.094 13:23:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:24:12.390 Malloc1 00:24:12.390 13:23:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:12.647 13:23:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:12.914 13:23:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:13.171 [2024-07-15 13:23:09.836354] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:13.171 13:23:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:13.428 13:23:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:24:13.428 13:23:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:13.428 13:23:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:13.428 13:23:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:24:13.428 13:23:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:13.428 13:23:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 
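In outline, the target bring-up traced here reduces to the RPC sequence below — a condensed sketch for reference only, with rpc.py standing in for /home/vagrant/spdk_repo/spdk/scripts/rpc.py and $SPDK for the repo root; every command is taken from the surrounding trace:

  # Target runs inside the test namespace; the script backgrounds it and waits on /var/tmp/spdk.sock.
  ip netns exec nvmf_tgt_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc1                  # 64 MB malloc bdev, 512-byte blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # fio attaches through the SPDK nvme fio plugin rather than the kernel initiator:
  LD_PRELOAD=$SPDK/build/fio/spdk_nvme /usr/src/fio/fio $SPDK/app/fio/nvme/example_config.fio \
      '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096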
00:24:13.428 13:23:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:24:13.428 13:23:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:24:13.428 13:23:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:24:13.428 13:23:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:24:13.428 13:23:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:24:13.428 13:23:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:24:13.428 13:23:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:24:13.685 13:23:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:24:13.685 13:23:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:24:13.685 13:23:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:24:13.685 13:23:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:24:13.685 13:23:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:24:13.685 13:23:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:24:13.685 13:23:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:24:13.685 13:23:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:24:13.685 13:23:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:24:13.685 13:23:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:13.685 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:13.685 fio-3.35 00:24:13.685 Starting 1 thread 00:24:16.210 00:24:16.210 test: (groupid=0, jobs=1): err= 0: pid=105729: Mon Jul 15 13:23:12 2024 00:24:16.210 read: IOPS=8900, BW=34.8MiB/s (36.5MB/s)(69.8MiB/2007msec) 00:24:16.210 slat (usec): min=2, max=212, avg= 2.58, stdev= 2.21 00:24:16.210 clat (usec): min=2438, max=13500, avg=7475.75, stdev=506.52 00:24:16.210 lat (usec): min=2457, max=13503, avg=7478.33, stdev=506.35 00:24:16.210 clat percentiles (usec): 00:24:16.210 | 1.00th=[ 6456], 5.00th=[ 6783], 10.00th=[ 6915], 20.00th=[ 7111], 00:24:16.210 | 30.00th=[ 7242], 40.00th=[ 7373], 50.00th=[ 7439], 60.00th=[ 7570], 00:24:16.210 | 70.00th=[ 7701], 80.00th=[ 7832], 90.00th=[ 8029], 95.00th=[ 8225], 00:24:16.210 | 99.00th=[ 8586], 99.50th=[ 8848], 99.90th=[11731], 99.95th=[12649], 00:24:16.210 | 99.99th=[13435] 00:24:16.210 bw ( KiB/s): min=34552, max=36256, per=99.99%, avg=35600.00, stdev=736.84, samples=4 00:24:16.210 iops : min= 8638, max= 9064, avg=8900.00, stdev=184.21, samples=4 00:24:16.210 write: IOPS=8914, BW=34.8MiB/s (36.5MB/s)(69.9MiB/2007msec); 0 zone resets 00:24:16.210 slat (usec): min=2, max=149, avg= 2.71, stdev= 1.45 00:24:16.210 clat (usec): min=1509, max=12984, avg=6812.89, stdev=470.11 00:24:16.210 lat (usec): min=1519, max=12986, avg=6815.60, stdev=470.05 00:24:16.210 clat percentiles (usec): 00:24:16.210 | 1.00th=[ 5866], 5.00th=[ 6194], 10.00th=[ 6325], 
20.00th=[ 6521], 00:24:16.210 | 30.00th=[ 6652], 40.00th=[ 6718], 50.00th=[ 6783], 60.00th=[ 6915], 00:24:16.210 | 70.00th=[ 6980], 80.00th=[ 7111], 90.00th=[ 7308], 95.00th=[ 7439], 00:24:16.210 | 99.00th=[ 7767], 99.50th=[ 7963], 99.90th=[10945], 99.95th=[11994], 00:24:16.210 | 99.99th=[12911] 00:24:16.210 bw ( KiB/s): min=35352, max=36048, per=100.00%, avg=35658.00, stdev=289.43, samples=4 00:24:16.210 iops : min= 8838, max= 9012, avg=8914.50, stdev=72.36, samples=4 00:24:16.210 lat (msec) : 2=0.03%, 4=0.13%, 10=99.68%, 20=0.16% 00:24:16.210 cpu : usr=67.30%, sys=23.73%, ctx=6, majf=0, minf=6 00:24:16.210 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:24:16.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:16.210 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:16.210 issued rwts: total=17864,17892,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:16.210 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:16.210 00:24:16.210 Run status group 0 (all jobs): 00:24:16.210 READ: bw=34.8MiB/s (36.5MB/s), 34.8MiB/s-34.8MiB/s (36.5MB/s-36.5MB/s), io=69.8MiB (73.2MB), run=2007-2007msec 00:24:16.210 WRITE: bw=34.8MiB/s (36.5MB/s), 34.8MiB/s-34.8MiB/s (36.5MB/s-36.5MB/s), io=69.9MiB (73.3MB), run=2007-2007msec 00:24:16.211 13:23:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:16.211 13:23:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:16.211 13:23:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:24:16.211 13:23:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:16.211 13:23:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:24:16.211 13:23:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:24:16.211 13:23:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:24:16.211 13:23:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:24:16.211 13:23:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:24:16.211 13:23:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:24:16.211 13:23:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:24:16.211 13:23:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:24:16.211 13:23:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:24:16.211 13:23:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:24:16.211 13:23:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:24:16.211 13:23:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:24:16.211 13:23:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:24:16.211 13:23:12 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1341 -- # awk '{print $3}' 00:24:16.211 13:23:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:24:16.211 13:23:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:24:16.211 13:23:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:24:16.211 13:23:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:16.211 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:24:16.211 fio-3.35 00:24:16.211 Starting 1 thread 00:24:18.734 00:24:18.735 test: (groupid=0, jobs=1): err= 0: pid=105772: Mon Jul 15 13:23:15 2024 00:24:18.735 read: IOPS=7751, BW=121MiB/s (127MB/s)(243MiB/2007msec) 00:24:18.735 slat (usec): min=3, max=125, avg= 3.95, stdev= 2.02 00:24:18.735 clat (usec): min=2233, max=21313, avg=9781.11, stdev=2515.06 00:24:18.735 lat (usec): min=2237, max=21319, avg=9785.06, stdev=2515.38 00:24:18.735 clat percentiles (usec): 00:24:18.735 | 1.00th=[ 4883], 5.00th=[ 6063], 10.00th=[ 6718], 20.00th=[ 7504], 00:24:18.735 | 30.00th=[ 8291], 40.00th=[ 8979], 50.00th=[ 9634], 60.00th=[10421], 00:24:18.735 | 70.00th=[11076], 80.00th=[11600], 90.00th=[12911], 95.00th=[14091], 00:24:18.735 | 99.00th=[16712], 99.50th=[17695], 99.90th=[20579], 99.95th=[21103], 00:24:18.735 | 99.99th=[21365] 00:24:18.735 bw ( KiB/s): min=55072, max=73024, per=50.95%, avg=63192.00, stdev=7396.89, samples=4 00:24:18.735 iops : min= 3442, max= 4564, avg=3949.50, stdev=462.31, samples=4 00:24:18.735 write: IOPS=4605, BW=72.0MiB/s (75.5MB/s)(130MiB/1802msec); 0 zone resets 00:24:18.735 slat (usec): min=36, max=571, avg=40.58, stdev=10.03 00:24:18.735 clat (usec): min=2883, max=22218, avg=11730.71, stdev=2339.17 00:24:18.735 lat (usec): min=2920, max=22270, avg=11771.29, stdev=2342.58 00:24:18.735 clat percentiles (usec): 00:24:18.735 | 1.00th=[ 7570], 5.00th=[ 8586], 10.00th=[ 9110], 20.00th=[ 9765], 00:24:18.735 | 30.00th=[10290], 40.00th=[10814], 50.00th=[11338], 60.00th=[11863], 00:24:18.735 | 70.00th=[12780], 80.00th=[13566], 90.00th=[15008], 95.00th=[16057], 00:24:18.735 | 99.00th=[18482], 99.50th=[19268], 99.90th=[21627], 99.95th=[21890], 00:24:18.735 | 99.99th=[22152] 00:24:18.735 bw ( KiB/s): min=57280, max=75072, per=89.22%, avg=65752.00, stdev=7280.98, samples=4 00:24:18.735 iops : min= 3580, max= 4692, avg=4109.50, stdev=455.06, samples=4 00:24:18.735 lat (msec) : 4=0.18%, 10=43.68%, 20=55.86%, 50=0.28% 00:24:18.735 cpu : usr=72.98%, sys=17.20%, ctx=7, majf=0, minf=2 00:24:18.735 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:24:18.735 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:18.735 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:18.735 issued rwts: total=15558,8300,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:18.735 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:18.735 00:24:18.735 Run status group 0 (all jobs): 00:24:18.735 READ: bw=121MiB/s (127MB/s), 121MiB/s-121MiB/s (127MB/s-127MB/s), io=243MiB (255MB), run=2007-2007msec 00:24:18.735 WRITE: bw=72.0MiB/s (75.5MB/s), 72.0MiB/s-72.0MiB/s (75.5MB/s-75.5MB/s), io=130MiB (136MB), run=1802-1802msec 00:24:18.735 13:23:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:18.735 13:23:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:24:18.735 13:23:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:24:18.735 13:23:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:24:18.735 13:23:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1509 -- # bdfs=() 00:24:18.735 13:23:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1509 -- # local bdfs 00:24:18.735 13:23:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:24:18.735 13:23:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:24:18.735 13:23:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:24:18.735 13:23:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1511 -- # (( 2 == 0 )) 00:24:18.735 13:23:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:24:18.735 13:23:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 -i 10.0.0.2 00:24:19.299 Nvme0n1 00:24:19.299 13:23:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:24:19.556 13:23:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=853fae54-f79e-409d-9f32-24ac8b496b15 00:24:19.556 13:23:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 853fae54-f79e-409d-9f32-24ac8b496b15 00:24:19.556 13:23:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # local lvs_uuid=853fae54-f79e-409d-9f32-24ac8b496b15 00:24:19.556 13:23:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1361 -- # local lvs_info 00:24:19.556 13:23:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1362 -- # local fc 00:24:19.556 13:23:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local cs 00:24:19.557 13:23:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:19.814 13:23:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:24:19.814 { 00:24:19.814 "base_bdev": "Nvme0n1", 00:24:19.814 "block_size": 4096, 00:24:19.814 "cluster_size": 1073741824, 00:24:19.814 "free_clusters": 4, 00:24:19.814 "name": "lvs_0", 00:24:19.814 "total_data_clusters": 4, 00:24:19.814 "uuid": "853fae54-f79e-409d-9f32-24ac8b496b15" 00:24:19.814 } 00:24:19.814 ]' 00:24:19.814 13:23:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="853fae54-f79e-409d-9f32-24ac8b496b15") .free_clusters' 00:24:19.814 13:23:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # fc=4 00:24:19.814 13:23:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="853fae54-f79e-409d-9f32-24ac8b496b15") .cluster_size' 00:24:19.814 13:23:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # cs=1073741824 00:24:19.814 4096 00:24:19.814 13:23:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # free_mb=4096 00:24:19.814 13:23:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # echo 4096 00:24:19.814 13:23:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@55 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:24:20.072 4f03df7f-a0ca-468b-816f-f5a9df0d66ea 00:24:20.072 13:23:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:24:20.329 13:23:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:24:20.587 13:23:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:20.844 13:23:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:20.844 13:23:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:20.844 13:23:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:24:20.844 13:23:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:20.844 13:23:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:24:20.844 13:23:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:24:20.844 13:23:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:24:20.844 13:23:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:24:20.844 13:23:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:24:20.844 13:23:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:24:20.844 13:23:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:24:20.844 13:23:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:24:20.844 13:23:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:24:20.844 13:23:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:24:20.844 13:23:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:24:20.844 13:23:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:24:20.844 13:23:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:24:20.844 13:23:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:24:20.844 13:23:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:24:20.844 13:23:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:24:20.844 13:23:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:24:20.844 13:23:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 
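The local-NVMe variant above follows the same pattern, except the namespace is a logical volume carved from the attached drive; the 4096 MiB lvol size is what get_lvs_free_mb derives from the store's 4 free clusters of 1073741824 bytes each. A condensed sketch, again with rpc.py standing in for the full scripts/rpc.py path and every command taken from the trace:

  rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 -i 10.0.0.2   # local drive becomes bdev Nvme0n1
  rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0                       # 1 GiB clusters, 4 free
  rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096                                       # lvol sized to the free space
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420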
00:24:21.147 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:21.147 fio-3.35 00:24:21.147 Starting 1 thread 00:24:23.688 00:24:23.688 test: (groupid=0, jobs=1): err= 0: pid=105926: Mon Jul 15 13:23:19 2024 00:24:23.688 read: IOPS=6002, BW=23.4MiB/s (24.6MB/s)(47.1MiB/2009msec) 00:24:23.688 slat (usec): min=2, max=286, avg= 2.67, stdev= 3.62 00:24:23.688 clat (usec): min=4325, max=19604, avg=11192.34, stdev=1093.83 00:24:23.688 lat (usec): min=4332, max=19606, avg=11195.01, stdev=1093.60 00:24:23.688 clat percentiles (usec): 00:24:23.688 | 1.00th=[ 9110], 5.00th=[ 9765], 10.00th=[10028], 20.00th=[10290], 00:24:23.688 | 30.00th=[10683], 40.00th=[10814], 50.00th=[11076], 60.00th=[11338], 00:24:23.688 | 70.00th=[11600], 80.00th=[11994], 90.00th=[12518], 95.00th=[12911], 00:24:23.688 | 99.00th=[14746], 99.50th=[15270], 99.90th=[16712], 99.95th=[16909], 00:24:23.688 | 99.99th=[19530] 00:24:23.688 bw ( KiB/s): min=22251, max=24704, per=99.88%, avg=23982.75, stdev=1159.44, samples=4 00:24:23.688 iops : min= 5562, max= 6176, avg=5995.50, stdev=290.23, samples=4 00:24:23.688 write: IOPS=5990, BW=23.4MiB/s (24.5MB/s)(47.0MiB/2009msec); 0 zone resets 00:24:23.688 slat (usec): min=2, max=211, avg= 2.77, stdev= 2.18 00:24:23.688 clat (usec): min=2034, max=17033, avg=10061.22, stdev=997.97 00:24:23.688 lat (usec): min=2044, max=17036, avg=10063.99, stdev=997.83 00:24:23.688 clat percentiles (usec): 00:24:23.688 | 1.00th=[ 8160], 5.00th=[ 8717], 10.00th=[ 8979], 20.00th=[ 9372], 00:24:23.688 | 30.00th=[ 9503], 40.00th=[ 9765], 50.00th=[10028], 60.00th=[10159], 00:24:23.688 | 70.00th=[10421], 80.00th=[10683], 90.00th=[11207], 95.00th=[11600], 00:24:23.688 | 99.00th=[13173], 99.50th=[13566], 99.90th=[15139], 99.95th=[16581], 00:24:23.688 | 99.99th=[16909] 00:24:23.688 bw ( KiB/s): min=23145, max=24320, per=99.87%, avg=23930.25, stdev=541.45, samples=4 00:24:23.688 iops : min= 5786, max= 6080, avg=5982.50, stdev=135.48, samples=4 00:24:23.688 lat (msec) : 4=0.05%, 10=30.10%, 20=69.86% 00:24:23.688 cpu : usr=70.92%, sys=22.26%, ctx=5, majf=0, minf=6 00:24:23.688 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:24:23.688 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:23.688 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:23.688 issued rwts: total=12060,12035,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:23.688 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:23.688 00:24:23.688 Run status group 0 (all jobs): 00:24:23.688 READ: bw=23.4MiB/s (24.6MB/s), 23.4MiB/s-23.4MiB/s (24.6MB/s-24.6MB/s), io=47.1MiB (49.4MB), run=2009-2009msec 00:24:23.688 WRITE: bw=23.4MiB/s (24.5MB/s), 23.4MiB/s-23.4MiB/s (24.5MB/s-24.5MB/s), io=47.0MiB (49.3MB), run=2009-2009msec 00:24:23.688 13:23:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:23.688 13:23:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:24:23.949 13:23:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=709391de-1c78-4cc2-ad2c-e44649c1ea16 00:24:23.949 13:23:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 709391de-1c78-4cc2-ad2c-e44649c1ea16 00:24:23.949 13:23:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # local 
lvs_uuid=709391de-1c78-4cc2-ad2c-e44649c1ea16 00:24:23.949 13:23:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1361 -- # local lvs_info 00:24:23.949 13:23:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1362 -- # local fc 00:24:23.949 13:23:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local cs 00:24:23.949 13:23:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:24.207 13:23:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:24:24.207 { 00:24:24.207 "base_bdev": "Nvme0n1", 00:24:24.207 "block_size": 4096, 00:24:24.207 "cluster_size": 1073741824, 00:24:24.207 "free_clusters": 0, 00:24:24.207 "name": "lvs_0", 00:24:24.207 "total_data_clusters": 4, 00:24:24.207 "uuid": "853fae54-f79e-409d-9f32-24ac8b496b15" 00:24:24.207 }, 00:24:24.207 { 00:24:24.207 "base_bdev": "4f03df7f-a0ca-468b-816f-f5a9df0d66ea", 00:24:24.207 "block_size": 4096, 00:24:24.207 "cluster_size": 4194304, 00:24:24.207 "free_clusters": 1022, 00:24:24.207 "name": "lvs_n_0", 00:24:24.207 "total_data_clusters": 1022, 00:24:24.207 "uuid": "709391de-1c78-4cc2-ad2c-e44649c1ea16" 00:24:24.207 } 00:24:24.207 ]' 00:24:24.207 13:23:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="709391de-1c78-4cc2-ad2c-e44649c1ea16") .free_clusters' 00:24:24.207 13:23:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # fc=1022 00:24:24.207 13:23:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="709391de-1c78-4cc2-ad2c-e44649c1ea16") .cluster_size' 00:24:24.207 13:23:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # cs=4194304 00:24:24.207 13:23:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # free_mb=4088 00:24:24.207 4088 00:24:24.207 13:23:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # echo 4088 00:24:24.207 13:23:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:24:24.464 ba8d79b4-c518-4258-9a75-9239b02a8806 00:24:24.464 13:23:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:24:24.721 13:23:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:24:24.978 13:23:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:24:25.235 13:23:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:25.235 13:23:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:25.235 13:23:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:24:25.235 13:23:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:25.235 13:23:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local 
sanitizers 00:24:25.236 13:23:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:24:25.236 13:23:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:24:25.236 13:23:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:24:25.236 13:23:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:24:25.236 13:23:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:24:25.236 13:23:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:24:25.236 13:23:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:24:25.236 13:23:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:24:25.236 13:23:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:24:25.236 13:23:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:24:25.236 13:23:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:24:25.236 13:23:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:24:25.236 13:23:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:24:25.236 13:23:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:24:25.236 13:23:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:24:25.236 13:23:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:24:25.236 13:23:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:25.493 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:25.493 fio-3.35 00:24:25.493 Starting 1 thread 00:24:28.019 00:24:28.019 test: (groupid=0, jobs=1): err= 0: pid=106046: Mon Jul 15 13:23:24 2024 00:24:28.019 read: IOPS=5651, BW=22.1MiB/s (23.1MB/s)(44.4MiB/2010msec) 00:24:28.019 slat (usec): min=2, max=631, avg= 2.86, stdev= 6.80 00:24:28.019 clat (usec): min=4576, max=20665, avg=11963.14, stdev=1165.00 00:24:28.019 lat (usec): min=4585, max=20668, avg=11966.00, stdev=1164.74 00:24:28.019 clat percentiles (usec): 00:24:28.019 | 1.00th=[ 9634], 5.00th=[10290], 10.00th=[10683], 20.00th=[11076], 00:24:28.019 | 30.00th=[11338], 40.00th=[11600], 50.00th=[11863], 60.00th=[12125], 00:24:28.019 | 70.00th=[12387], 80.00th=[12780], 90.00th=[13435], 95.00th=[13829], 00:24:28.019 | 99.00th=[15139], 99.50th=[15664], 99.90th=[18220], 99.95th=[19530], 00:24:28.019 | 99.99th=[20579] 00:24:28.019 bw ( KiB/s): min=21952, max=23376, per=99.92%, avg=22586.00, stdev=594.85, samples=4 00:24:28.019 iops : min= 5488, max= 5844, avg=5646.50, stdev=148.71, samples=4 00:24:28.019 write: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(44.1MiB/2010msec); 0 zone resets 00:24:28.019 slat (usec): min=2, max=143, avg= 2.93, stdev= 1.91 00:24:28.019 clat (usec): min=2282, max=19364, avg=10660.20, stdev=1057.73 00:24:28.019 lat (usec): min=2294, max=19367, avg=10663.13, stdev=1057.54 00:24:28.019 clat percentiles (usec): 00:24:28.019 | 1.00th=[ 8455], 5.00th=[ 9241], 
10.00th=[ 9503], 20.00th=[ 9896], 00:24:28.019 | 30.00th=[10159], 40.00th=[10421], 50.00th=[10552], 60.00th=[10814], 00:24:28.019 | 70.00th=[11076], 80.00th=[11469], 90.00th=[11863], 95.00th=[12256], 00:24:28.019 | 99.00th=[13435], 99.50th=[14091], 99.90th=[17957], 99.95th=[19006], 00:24:28.019 | 99.99th=[19268] 00:24:28.019 bw ( KiB/s): min=21632, max=22936, per=99.96%, avg=22454.00, stdev=570.43, samples=4 00:24:28.019 iops : min= 5408, max= 5734, avg=5613.50, stdev=142.61, samples=4 00:24:28.019 lat (msec) : 4=0.05%, 10=13.11%, 20=86.83%, 50=0.01% 00:24:28.019 cpu : usr=70.83%, sys=22.45%, ctx=554, majf=0, minf=6 00:24:28.019 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:24:28.019 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:28.019 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:28.019 issued rwts: total=11359,11288,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:28.019 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:28.019 00:24:28.019 Run status group 0 (all jobs): 00:24:28.019 READ: bw=22.1MiB/s (23.1MB/s), 22.1MiB/s-22.1MiB/s (23.1MB/s-23.1MB/s), io=44.4MiB (46.5MB), run=2010-2010msec 00:24:28.019 WRITE: bw=21.9MiB/s (23.0MB/s), 21.9MiB/s-21.9MiB/s (23.0MB/s-23.0MB/s), io=44.1MiB (46.2MB), run=2010-2010msec 00:24:28.019 13:23:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:24:28.019 13:23:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:24:28.019 13:23:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:24:28.277 13:23:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:24:28.535 13:23:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:24:28.793 13:23:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:24:29.050 13:23:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:24:29.308 13:23:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:29.308 13:23:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:24:29.308 13:23:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:24:29.308 13:23:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:29.308 13:23:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:24:29.308 13:23:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:29.308 13:23:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:24:29.308 13:23:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:29.308 13:23:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:29.308 rmmod nvme_tcp 00:24:29.565 rmmod nvme_fabrics 00:24:29.565 rmmod nvme_keyring 00:24:29.565 13:23:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:29.565 13:23:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:24:29.565 13:23:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:24:29.565 13:23:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 105593 ']' 00:24:29.565 
13:23:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 105593 00:24:29.565 13:23:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@946 -- # '[' -z 105593 ']' 00:24:29.565 13:23:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@950 -- # kill -0 105593 00:24:29.565 13:23:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # uname 00:24:29.566 13:23:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:29.566 13:23:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 105593 00:24:29.566 13:23:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:29.566 13:23:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:29.566 killing process with pid 105593 00:24:29.566 13:23:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 105593' 00:24:29.566 13:23:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@965 -- # kill 105593 00:24:29.566 13:23:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@970 -- # wait 105593 00:24:29.824 13:23:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:29.824 13:23:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:29.824 13:23:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:29.824 13:23:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:29.824 13:23:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:29.824 13:23:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:29.824 13:23:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:29.824 13:23:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:29.824 13:23:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:29.824 ************************************ 00:24:29.824 END TEST nvmf_fio_host 00:24:29.824 ************************************ 00:24:29.824 00:24:29.824 real 0m19.715s 00:24:29.824 user 1m27.081s 00:24:29.824 sys 0m4.460s 00:24:29.824 13:23:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:29.824 13:23:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.824 13:23:26 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:29.824 13:23:26 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:24:29.824 13:23:26 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:29.824 13:23:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:29.824 ************************************ 00:24:29.824 START TEST nvmf_failover 00:24:29.824 ************************************ 00:24:29.824 13:23:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:29.824 * Looking for test storage... 
00:24:29.824 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:29.824 13:23:26 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:29.824 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:24:29.824 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:29.824 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:29.824 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:29.824 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:29.824 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:29.824 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:29.824 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:29.824 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:29.824 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:29.824 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:29.824 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:24:29.824 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=c8b8b44b-387e-43b9-a950-dc0d98528a02 00:24:29.824 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:29.824 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:29.824 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:29.824 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:29.824 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:29.824 13:23:26 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:29.824 13:23:26 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:29.824 13:23:26 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:29.825 13:23:26 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:29.825 13:23:26 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:29.825 13:23:26 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:29.825 13:23:26 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:24:29.825 13:23:26 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:29.825 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:24:29.825 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:29.825 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:29.825 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:29.825 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:29.825 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:29.825 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:29.825 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:29.825 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:29.825 13:23:26 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:29.825 13:23:26 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:29.825 13:23:26 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:29.825 13:23:26 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:29.825 13:23:26 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:24:29.825 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:29.825 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:29.825 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:29.825 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:29.825 
13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:29.825 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:29.825 13:23:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:29.825 13:23:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:29.825 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:24:29.825 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:24:29.825 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:24:29.825 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:24:29.825 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:24:29.825 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@432 -- # nvmf_veth_init 00:24:29.825 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:29.825 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:29.825 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:29.825 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:24:29.825 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:29.825 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:29.825 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:29.825 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:29.825 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:29.825 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:29.825 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:29.825 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:29.825 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:24:30.083 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:24:30.083 Cannot find device "nvmf_tgt_br" 00:24:30.083 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # true 00:24:30.083 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:24:30.083 Cannot find device "nvmf_tgt_br2" 00:24:30.083 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # true 00:24:30.083 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:24:30.083 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:24:30.083 Cannot find device "nvmf_tgt_br" 00:24:30.083 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # true 00:24:30.083 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:24:30.083 Cannot find device "nvmf_tgt_br2" 00:24:30.083 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # true 00:24:30.083 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:24:30.083 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 
00:24:30.083 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:30.083 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:30.083 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # true 00:24:30.083 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:30.083 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:30.083 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # true 00:24:30.083 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:24:30.083 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:30.083 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:30.083 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:30.083 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:30.083 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:30.083 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:30.083 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:30.083 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:30.083 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:24:30.083 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:24:30.083 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:24:30.083 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:24:30.083 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:30.083 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:30.083 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:30.341 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:24:30.341 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:24:30.341 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:24:30.341 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:30.341 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:30.341 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:30.341 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:30.341 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:24:30.341 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:30.341 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:24:30.341 00:24:30.341 --- 10.0.0.2 ping statistics --- 00:24:30.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:30.341 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:24:30.341 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:24:30.341 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:30.341 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:24:30.341 00:24:30.341 --- 10.0.0.3 ping statistics --- 00:24:30.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:30.341 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:24:30.341 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:30.341 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:30.341 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:24:30.341 00:24:30.341 --- 10.0.0.1 ping statistics --- 00:24:30.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:30.341 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:24:30.341 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:30.341 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@433 -- # return 0 00:24:30.341 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:30.341 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:30.341 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:30.341 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:30.341 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:30.341 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:30.341 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:30.341 13:23:26 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:24:30.341 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:30.341 13:23:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:30.341 13:23:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:30.341 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=106315 00:24:30.341 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:30.341 13:23:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 106315 00:24:30.341 13:23:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 106315 ']' 00:24:30.341 13:23:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:30.341 13:23:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:30.341 13:23:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:30.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:30.342 13:23:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:30.342 13:23:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:30.342 [2024-07-15 13:23:26.971990] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:24:30.342 [2024-07-15 13:23:26.972097] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:30.599 [2024-07-15 13:23:27.112454] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:30.599 [2024-07-15 13:23:27.215882] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:30.599 [2024-07-15 13:23:27.215957] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:30.599 [2024-07-15 13:23:27.215971] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:30.599 [2024-07-15 13:23:27.215982] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:30.599 [2024-07-15 13:23:27.215991] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:30.599 [2024-07-15 13:23:27.216141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:30.599 [2024-07-15 13:23:27.216288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:30.599 [2024-07-15 13:23:27.216296] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:31.530 13:23:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:31.530 13:23:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:24:31.530 13:23:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:31.530 13:23:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:31.530 13:23:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:31.530 13:23:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:31.530 13:23:27 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:31.530 [2024-07-15 13:23:28.237763] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:31.530 13:23:28 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:31.787 Malloc0 00:24:32.045 13:23:28 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:32.302 13:23:28 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:32.559 13:23:29 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:32.816 [2024-07-15 13:23:29.350661] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:32.816 13:23:29 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:33.073 [2024-07-15 13:23:29.614865] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:33.074 13:23:29 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:33.331 [2024-07-15 13:23:29.927137] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:33.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:33.331 13:23:29 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=106431 00:24:33.331 13:23:29 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:24:33.331 13:23:29 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:33.331 13:23:29 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 106431 /var/tmp/bdevperf.sock 00:24:33.331 13:23:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 106431 ']' 00:24:33.331 13:23:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:33.331 13:23:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:33.331 13:23:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:24:33.331 13:23:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:33.331 13:23:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:33.895 13:23:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:33.895 13:23:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:24:33.895 13:23:30 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:34.153 NVMe0n1 00:24:34.153 13:23:30 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:34.409 00:24:34.409 13:23:31 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:34.409 13:23:31 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=106461 00:24:34.409 13:23:31 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:24:35.341 13:23:32 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:35.599 [2024-07-15 13:23:32.329050] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1800eb0 is same with the state(5) to be set 00:24:35.599 [2024-07-15 13:23:32.329122] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1800eb0 is same with the state(5) to be set 00:24:35.599 [2024-07-15 13:23:32.329135] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1800eb0 is same with the state(5) to be set 00:24:35.599 [2024-07-15 13:23:32.329144] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1800eb0 is same with the state(5) to be set 00:24:35.599 [2024-07-15 13:23:32.329152] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1800eb0 is same with the state(5) to be set 00:24:35.599 [2024-07-15 13:23:32.329161] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1800eb0 is same with the state(5) to be set 00:24:35.599 [2024-07-15 13:23:32.329170] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1800eb0 is same with the state(5) to be set 00:24:35.599 [2024-07-15 13:23:32.329178] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1800eb0 is same with the state(5) to be set 00:24:35.599 [2024-07-15 13:23:32.329186] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1800eb0 is same with the state(5) to be set 00:24:35.599 [2024-07-15 13:23:32.329194] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1800eb0 is same with the state(5) to be set 00:24:35.599 [2024-07-15 13:23:32.329202] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1800eb0 is same with the state(5) to be set 00:24:35.599 [2024-07-15 13:23:32.329224] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1800eb0 is same with the state(5) to be set 00:24:35.599 [2024-07-15 13:23:32.329233] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1800eb0 is 
same with the state(5) to be set 00:24:35.599 [2024-07-15 13:23:32.329600] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1800eb0 is same with the
state(5) to be set 00:24:35.599 [2024-07-15 13:23:32.329607] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1800eb0 is same with the state(5) to be set 00:24:35.599 [2024-07-15 13:23:32.329615] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1800eb0 is same with the state(5) to be set 00:24:35.599 [2024-07-15 13:23:32.329623] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1800eb0 is same with the state(5) to be set 00:24:35.599 [2024-07-15 13:23:32.329631] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1800eb0 is same with the state(5) to be set 00:24:35.599 [2024-07-15 13:23:32.329639] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1800eb0 is same with the state(5) to be set 00:24:35.857 13:23:32 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:24:39.210 13:23:35 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:39.210 00:24:39.210 13:23:35 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:39.466 [2024-07-15 13:23:36.046187] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18016e0 is same with the state(5) to be set 00:24:39.466 [2024-07-15 13:23:36.046252] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18016e0 is same with the state(5) to be set 00:24:39.466 [2024-07-15 13:23:36.046264] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18016e0 is same with the state(5) to be set 00:24:39.466 [2024-07-15 13:23:36.046273] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18016e0 is same with the state(5) to be set 00:24:39.466 [2024-07-15 13:23:36.046282] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18016e0 is same with the state(5) to be set 00:24:39.466 [2024-07-15 13:23:36.046291] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18016e0 is same with the state(5) to be set 00:24:39.466 [2024-07-15 13:23:36.046300] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18016e0 is same with the state(5) to be set 00:24:39.466 [2024-07-15 13:23:36.046309] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18016e0 is same with the state(5) to be set 00:24:39.466 [2024-07-15 13:23:36.046318] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18016e0 is same with the state(5) to be set 00:24:39.466 [2024-07-15 13:23:36.046326] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18016e0 is same with the state(5) to be set 00:24:39.466 [2024-07-15 13:23:36.046334] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18016e0 is same with the state(5) to be set 00:24:39.466 [2024-07-15 13:23:36.046343] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18016e0 is same with the state(5) to be set 00:24:39.466 [2024-07-15 13:23:36.046351] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18016e0 is same with the state(5) to be set 00:24:39.466 [2024-07-15 13:23:36.046359] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18016e0 is same with the state(5) to be set 00:24:39.466 [2024-07-15 13:23:36.046531] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18016e0 is same with the
state(5) to be set 00:24:39.466 [2024-07-15 13:23:36.046539] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18016e0 is same with the state(5) to be set 00:24:39.467 [2024-07-15 13:23:36.046548] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18016e0 is same with the state(5) to be set 00:24:39.467 [2024-07-15 13:23:36.046557] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18016e0 is same with the state(5) to be set 00:24:39.467 [2024-07-15 13:23:36.046565] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18016e0 is same with the state(5) to be set 00:24:39.467 [2024-07-15 13:23:36.046574] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18016e0 is same with the state(5) to be set 00:24:39.467 13:23:36 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:24:42.743 13:23:39 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:42.743 [2024-07-15 13:23:39.350300] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:42.743 13:23:39 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:24:43.675 13:23:40 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:44.242 13:23:40 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 106461 00:24:49.506 0 00:24:49.506 13:23:46 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 106431 00:24:49.506 13:23:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 106431 ']' 00:24:49.506 13:23:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 106431 00:24:49.506 13:23:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:24:49.506 13:23:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:49.506 13:23:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 106431 00:24:49.506 killing process with pid 106431 00:24:49.506 13:23:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:49.506 13:23:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:49.506 13:23:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 106431' 00:24:49.506 13:23:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 106431 00:24:49.506 13:23:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 106431 00:24:49.775 13:23:46 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:24:49.775 [2024-07-15 13:23:29.993953] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:24:49.775 [2024-07-15 13:23:29.994070] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106431 ] 00:24:49.775 [2024-07-15 13:23:30.126815] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:49.775 [2024-07-15 13:23:30.231815] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:49.775 Running I/O for 15 seconds... 00:24:49.775 [2024-07-15 13:23:32.331483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:84384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.775 [2024-07-15 13:23:32.331529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.775 [2024-07-15 13:23:32.331563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:84472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.775 [2024-07-15 13:23:32.331582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.775 [2024-07-15 13:23:32.331598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:84480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.775 [2024-07-15 13:23:32.331613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.775 [2024-07-15 13:23:32.331628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:84488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.775 [2024-07-15 13:23:32.331642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.775 [2024-07-15 13:23:32.331657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:84496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.775 [2024-07-15 13:23:32.331671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.775 [2024-07-15 13:23:32.331686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:84504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.775 [2024-07-15 13:23:32.331700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.775 [2024-07-15 13:23:32.331715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:84512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.775 [2024-07-15 13:23:32.331728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.775 [2024-07-15 13:23:32.331744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:84520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.775 [2024-07-15 13:23:32.331757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.775 [2024-07-15 13:23:32.331772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:84528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.775 [2024-07-15 
13:23:32.331786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.775 [2024-07-15 13:23:32.331801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:84536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.775 [2024-07-15 13:23:32.331815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.775 [2024-07-15 13:23:32.331830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:84544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.775 [2024-07-15 13:23:32.331844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.775 [2024-07-15 13:23:32.331887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:84552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.775 [2024-07-15 13:23:32.331903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.775 [2024-07-15 13:23:32.331918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:84560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.775 [2024-07-15 13:23:32.331931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.775 [2024-07-15 13:23:32.331946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:84568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.775 [2024-07-15 13:23:32.331960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.775 [2024-07-15 13:23:32.331983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:84576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.775 [2024-07-15 13:23:32.331998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.775 [2024-07-15 13:23:32.332013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:84584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.775 [2024-07-15 13:23:32.332026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.775 [2024-07-15 13:23:32.332041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:84592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.775 [2024-07-15 13:23:32.332056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.775 [2024-07-15 13:23:32.332071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:84600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.775 [2024-07-15 13:23:32.332085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.775 [2024-07-15 13:23:32.332099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:84608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.775 [2024-07-15 13:23:32.332113] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:49.775-00:24:49.778 [2024-07-15 13:23:32.332128-335022] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: repeated for every outstanding I/O on qid:1: WRITE sqid:1 lba:84616-85288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 and READ sqid:1 lba:84392-84464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each command completed with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:49.778 [2024-07-15 13:23:32.335056-335725] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o and 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: repeated for WRITE sqid:1 cid:0 nsid:1 lba:85296-85400 len:8 PRP1 0x0 PRP2 0x0, each completed with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:49.778 [2024-07-15 13:23:32.335791] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xba42c0 was disconnected and freed. reset controller.
00:24:49.778 [2024-07-15 13:23:32.335814] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:24:49.778-00:24:49.779 [2024-07-15 13:23:32.335885-335999] nvme_qpair.c: 223:nvme_admin_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0-3 nsid:0 cdw10:00000000 cdw11:00000000, each completed with ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:49.779 [2024-07-15 13:23:32.336013] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:49.779 [2024-07-15 13:23:32.336076] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb850c0 (9): Bad file descriptor
00:24:49.779 [2024-07-15 13:23:32.339951] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:49.779 [2024-07-15 13:23:32.380135] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
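Every data command in the trace above fails with the same status pair, (00/08): status code type 0x00 (generic) and status code 0x08 (ABORTED - SQ DELETION), which SPDK prints for I/O still queued on the qpair while bdev_nvme tears down the path to 10.0.0.2:4420 and fails over to 10.0.0.2:4421. The snippet below is a minimal sketch, not part of this test run, of how that status can be recognized in a completion callback using SPDK's public definitions; the io_complete() function name and the fabricated completion in main() are illustrative assumptions, and only the spdk/nvme.h types, the SPDK_NVME_SCT_GENERIC / SPDK_NVME_SC_ABORTED_SQ_DELETION constants, and spdk_nvme_cpl_is_error() are actual SPDK API. It compiles with just the SPDK headers on the include path.

/*
 * Sketch only (assumed example, not from this build): classify the
 * "ABORTED - SQ DELETION (00/08)" completions seen in the log.
 */
#include <stdio.h>
#include <string.h>
#include "spdk/nvme.h"

/* Matches the spdk_nvme_cmd_cb signature used for I/O completion callbacks. */
static void
io_complete(void *ctx, const struct spdk_nvme_cpl *cpl)
{
	(void)ctx;
	if (spdk_nvme_cpl_is_error(cpl) &&
	    cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
	    cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
		/* The command was aborted because its submission queue was
		 * deleted (for example during a path failover), not because
		 * of a media or transport data error. */
		printf("aborted by SQ deletion (sct=0x%02x, sc=0x%02x)\n",
		       (unsigned)cpl->status.sct, (unsigned)cpl->status.sc);
	}
}

int
main(void)
{
	/* Fabricate the (00/08) completion from the log for demonstration. */
	struct spdk_nvme_cpl cpl;

	memset(&cpl, 0, sizeof(cpl));
	cpl.status.sct = SPDK_NVME_SCT_GENERIC;            /* 0x00 */
	cpl.status.sc = SPDK_NVME_SC_ABORTED_SQ_DELETION;  /* 0x08 */
	io_complete(NULL, &cpl);
	return 0;
}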
00:24:49.779-00:24:49.781 [2024-07-15 13:23:36.046916-049499] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: repeated for every outstanding I/O on qid:1: READ sqid:1 lba:96536-96856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 and WRITE sqid:1 lba:96984-97288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000, each command completed with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:49.781 [2024-07-15 13:23:36.049514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:97296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:49.781 [2024-07-15 13:23:36.049532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.781 [2024-07-15 13:23:36.049547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:97304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.781 [2024-07-15 13:23:36.049560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.781 [2024-07-15 13:23:36.049575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:97312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.781 [2024-07-15 13:23:36.049595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.781 [2024-07-15 13:23:36.049611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:97320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.781 [2024-07-15 13:23:36.049625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.781 [2024-07-15 13:23:36.049640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:97328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.781 [2024-07-15 13:23:36.049653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.781 [2024-07-15 13:23:36.049668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:97336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.781 [2024-07-15 13:23:36.049681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.781 [2024-07-15 13:23:36.049696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:97344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.781 [2024-07-15 13:23:36.049709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.781 [2024-07-15 13:23:36.049724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:97352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.781 [2024-07-15 13:23:36.049737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.781 [2024-07-15 13:23:36.049752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:97360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.781 [2024-07-15 13:23:36.049765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.781 [2024-07-15 13:23:36.049780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:97368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.781 [2024-07-15 13:23:36.049793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.781 [2024-07-15 13:23:36.049808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:97376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.781 [2024-07-15 13:23:36.049821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.781 
[2024-07-15 13:23:36.049841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:97384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.781 [2024-07-15 13:23:36.049855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.781 [2024-07-15 13:23:36.049870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:97392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.781 [2024-07-15 13:23:36.049884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.781 [2024-07-15 13:23:36.049898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:97400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.781 [2024-07-15 13:23:36.049912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.781 [2024-07-15 13:23:36.049927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:97408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.781 [2024-07-15 13:23:36.049940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.781 [2024-07-15 13:23:36.049957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:97416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.781 [2024-07-15 13:23:36.049978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.781 [2024-07-15 13:23:36.049994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:97424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.781 [2024-07-15 13:23:36.050015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.781 [2024-07-15 13:23:36.050030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:97432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.781 [2024-07-15 13:23:36.050044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.781 [2024-07-15 13:23:36.050060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:97440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.781 [2024-07-15 13:23:36.050073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.781 [2024-07-15 13:23:36.050088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:97448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.781 [2024-07-15 13:23:36.050101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.781 [2024-07-15 13:23:36.050116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:97456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.782 [2024-07-15 13:23:36.050130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.782 [2024-07-15 13:23:36.050144] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:97464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.782 [2024-07-15 13:23:36.050157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.782 [2024-07-15 13:23:36.050172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:97472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.782 [2024-07-15 13:23:36.050186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.782 [2024-07-15 13:23:36.050201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:97480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.782 [2024-07-15 13:23:36.050225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.782 [2024-07-15 13:23:36.050240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:97488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.782 [2024-07-15 13:23:36.050254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.782 [2024-07-15 13:23:36.050269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:97496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.782 [2024-07-15 13:23:36.050282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.782 [2024-07-15 13:23:36.050297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:97504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.782 [2024-07-15 13:23:36.050310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.782 [2024-07-15 13:23:36.050329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:97512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.782 [2024-07-15 13:23:36.050343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.782 [2024-07-15 13:23:36.050364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:97520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.782 [2024-07-15 13:23:36.050378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.782 [2024-07-15 13:23:36.050393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:97528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.782 [2024-07-15 13:23:36.050406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.782 [2024-07-15 13:23:36.050421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:97536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.782 [2024-07-15 13:23:36.050434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.782 [2024-07-15 13:23:36.050449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:49 nsid:1 lba:97544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.782 [2024-07-15 13:23:36.050462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.782 [2024-07-15 13:23:36.050477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:97552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.782 [2024-07-15 13:23:36.050495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.782 [2024-07-15 13:23:36.050511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:96864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.782 [2024-07-15 13:23:36.050524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.782 [2024-07-15 13:23:36.050539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:96872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.782 [2024-07-15 13:23:36.050559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.782 [2024-07-15 13:23:36.050573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:96880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.782 [2024-07-15 13:23:36.050587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.782 [2024-07-15 13:23:36.050602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:96888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.782 [2024-07-15 13:23:36.050615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.782 [2024-07-15 13:23:36.050630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:96896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.782 [2024-07-15 13:23:36.050642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.782 [2024-07-15 13:23:36.050657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:96904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.782 [2024-07-15 13:23:36.050670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.782 [2024-07-15 13:23:36.050685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:96912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.782 [2024-07-15 13:23:36.050699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.782 [2024-07-15 13:23:36.050723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:96920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.782 [2024-07-15 13:23:36.050747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.782 [2024-07-15 13:23:36.050763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:96928 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.782 [2024-07-15 13:23:36.050776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.782 [2024-07-15 13:23:36.050791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:96936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.782 [2024-07-15 13:23:36.050804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.782 [2024-07-15 13:23:36.050825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:96944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.782 [2024-07-15 13:23:36.050839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.782 [2024-07-15 13:23:36.050854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:96952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.782 [2024-07-15 13:23:36.050867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.782 [2024-07-15 13:23:36.050882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:96960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.782 [2024-07-15 13:23:36.050895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.782 [2024-07-15 13:23:36.050910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:96968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.782 [2024-07-15 13:23:36.050923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.782 [2024-07-15 13:23:36.050962] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:49.782 [2024-07-15 13:23:36.050976] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:49.782 [2024-07-15 13:23:36.050988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96976 len:8 PRP1 0x0 PRP2 0x0 00:24:49.782 [2024-07-15 13:23:36.051007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.782 [2024-07-15 13:23:36.051076] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xd514c0 was disconnected and freed. reset controller. 
00:24:49.782 [2024-07-15 13:23:36.051093] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:24:49.782 [2024-07-15 13:23:36.051149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:49.782 [2024-07-15 13:23:36.051169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.782 [2024-07-15 13:23:36.051184] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:49.782 [2024-07-15 13:23:36.051197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.782 [2024-07-15 13:23:36.051225] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:49.782 [2024-07-15 13:23:36.051251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.782 [2024-07-15 13:23:36.051266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:49.782 [2024-07-15 13:23:36.051292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.782 [2024-07-15 13:23:36.051306] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:49.782 [2024-07-15 13:23:36.051348] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb850c0 (9): Bad file descriptor 00:24:49.782 [2024-07-15 13:23:36.055226] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:49.782 [2024-07-15 13:23:36.092863] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
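(Editor's example, not part of the captured log.) The run of notices above is SPDK draining qpair 1 during the path switch: every queued WRITE/READ is printed by nvme_io_qpair_print_command and completed as "ABORTED - SQ DELETION" before bdev_nvme fails over from 10.0.0.2:4421 to 10.0.0.2:4422 and resets nqn.2016-06.io.spdk:cnode1. When triaging output like this, a small parser can tally the storm instead of reading it line by line. The sketch below is a minimal, hypothetical helper: it only assumes the record format visible in this console log, and the file name "console.log" is a placeholder.

    #!/usr/bin/env python3
    # Hypothetical helper: summarize the aborted-I/O records printed by
    # nvme_qpair.c in an SPDK autotest console log. Not part of the test suite.
    import re
    from collections import Counter

    # Matches "nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:97064 ..."
    CMD_RE = re.compile(
        r"nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE) "
        r"sqid:(\d+) cid:(\d+) nsid:(\d+) lba:(\d+)"
    )
    # Matches "spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) ..."
    ABORT_RE = re.compile(r"spdk_nvme_print_completion: \*NOTICE\*: ABORTED - SQ DELETION")

    def summarize(path="console.log"):
        opcodes = Counter()
        aborts = 0
        with open(path) as log:
            for line in log:
                # A wrapped console line may hold several records; count them all.
                for match in CMD_RE.finditer(line):
                    opcodes[match.group(1)] += 1
                aborts += len(ABORT_RE.findall(line))
        print(f"commands printed: {sum(opcodes.values())} ({dict(opcodes)})")
        print(f"completions aborted by SQ deletion: {aborts}")

    if __name__ == "__main__":
        summarize()

Run against a saved copy of this console output, it reports how many queued commands were flushed per failover, which makes it easy to confirm that the abort count matches the queue depth the test drives rather than scrolling through each batch.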
00:24:49.782 [2024-07-15 13:23:40.660536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:23936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.782 [2024-07-15 13:23:40.660613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.782 [2024-07-15 13:23:40.660641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.782 [2024-07-15 13:23:40.660658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.783 [2024-07-15 13:23:40.660673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:23952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.783 [2024-07-15 13:23:40.660687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.783 [2024-07-15 13:23:40.660702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:23960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.783 [2024-07-15 13:23:40.660715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.783 [2024-07-15 13:23:40.660730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:23968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.783 [2024-07-15 13:23:40.660744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.783 [2024-07-15 13:23:40.660759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:23976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.783 [2024-07-15 13:23:40.660772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.783 [2024-07-15 13:23:40.660787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.783 [2024-07-15 13:23:40.660801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.783 [2024-07-15 13:23:40.660816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:23992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.783 [2024-07-15 13:23:40.660829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.783 [2024-07-15 13:23:40.660844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:24000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.783 [2024-07-15 13:23:40.660857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.783 [2024-07-15 13:23:40.660872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:24008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.783 [2024-07-15 13:23:40.660886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.783 [2024-07-15 13:23:40.660901] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:24016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.783 [2024-07-15 13:23:40.660943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.783 [2024-07-15 13:23:40.660960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:24024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.783 [2024-07-15 13:23:40.660973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.783 [2024-07-15 13:23:40.660989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:24032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.783 [2024-07-15 13:23:40.661002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.783 [2024-07-15 13:23:40.661017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:24040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.783 [2024-07-15 13:23:40.661030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.783 [2024-07-15 13:23:40.661045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:24048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.783 [2024-07-15 13:23:40.661059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.783 [2024-07-15 13:23:40.661073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:24056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.783 [2024-07-15 13:23:40.661087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.783 [2024-07-15 13:23:40.661101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.783 [2024-07-15 13:23:40.661115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.783 [2024-07-15 13:23:40.661131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:24072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.783 [2024-07-15 13:23:40.661145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.783 [2024-07-15 13:23:40.661160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:24080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.783 [2024-07-15 13:23:40.661173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.783 [2024-07-15 13:23:40.661188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:24088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.783 [2024-07-15 13:23:40.661201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.783 [2024-07-15 13:23:40.661231] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:24096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.783 [2024-07-15 13:23:40.661246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.783 [2024-07-15 13:23:40.661261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:24104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.783 [2024-07-15 13:23:40.661274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.783 [2024-07-15 13:23:40.661288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:24112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.783 [2024-07-15 13:23:40.661302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.783 [2024-07-15 13:23:40.661317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.783 [2024-07-15 13:23:40.661340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.783 [2024-07-15 13:23:40.661356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:24128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.783 [2024-07-15 13:23:40.661369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.783 [2024-07-15 13:23:40.661384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:24136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.783 [2024-07-15 13:23:40.661398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.783 [2024-07-15 13:23:40.661413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:24144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.783 [2024-07-15 13:23:40.661427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.783 [2024-07-15 13:23:40.661442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:24152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.783 [2024-07-15 13:23:40.661456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.783 [2024-07-15 13:23:40.661471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:24160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.783 [2024-07-15 13:23:40.661484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.783 [2024-07-15 13:23:40.661499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.783 [2024-07-15 13:23:40.661513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.783 [2024-07-15 13:23:40.661528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:24176 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.783 [2024-07-15 13:23:40.661542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.783 [2024-07-15 13:23:40.661557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:24184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.783 [2024-07-15 13:23:40.661571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.783 [2024-07-15 13:23:40.661585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:24192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.783 [2024-07-15 13:23:40.661599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.783 [2024-07-15 13:23:40.661615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:24200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.783 [2024-07-15 13:23:40.661629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.783 [2024-07-15 13:23:40.661644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:24208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.783 [2024-07-15 13:23:40.661657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.783 [2024-07-15 13:23:40.661672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:24216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.783 [2024-07-15 13:23:40.661687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.783 [2024-07-15 13:23:40.661709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:24224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.783 [2024-07-15 13:23:40.661723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.783 [2024-07-15 13:23:40.661738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:24232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.783 [2024-07-15 13:23:40.661752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.783 [2024-07-15 13:23:40.661767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:24240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.783 [2024-07-15 13:23:40.661781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.784 [2024-07-15 13:23:40.661796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:24248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.784 [2024-07-15 13:23:40.661810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.784 [2024-07-15 13:23:40.661825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:24256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.784 
[2024-07-15 13:23:40.661838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.784 [2024-07-15 13:23:40.661853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.784 [2024-07-15 13:23:40.661867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.784 [2024-07-15 13:23:40.661882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:24272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.784 [2024-07-15 13:23:40.661895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.784 [2024-07-15 13:23:40.661910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:24280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.784 [2024-07-15 13:23:40.661924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.784 [2024-07-15 13:23:40.661939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:24288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.784 [2024-07-15 13:23:40.661953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.784 [2024-07-15 13:23:40.661968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.784 [2024-07-15 13:23:40.661982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.784 [2024-07-15 13:23:40.661998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:23720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.784 [2024-07-15 13:23:40.662012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.784 [2024-07-15 13:23:40.662027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:23728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.784 [2024-07-15 13:23:40.662040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.784 [2024-07-15 13:23:40.662055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:23736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.784 [2024-07-15 13:23:40.662075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.784 [2024-07-15 13:23:40.662092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:23744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.784 [2024-07-15 13:23:40.662106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.784 [2024-07-15 13:23:40.662122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:24296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.784 [2024-07-15 13:23:40.662135] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.784 [2024-07-15 13:23:40.662150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:24304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.784 [2024-07-15 13:23:40.662164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.784 [2024-07-15 13:23:40.662179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:24312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.784 [2024-07-15 13:23:40.662192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.784 [2024-07-15 13:23:40.662217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:24320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.784 [2024-07-15 13:23:40.662233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.784 [2024-07-15 13:23:40.662248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:24328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.784 [2024-07-15 13:23:40.662262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.784 [2024-07-15 13:23:40.662276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.784 [2024-07-15 13:23:40.662290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.784 [2024-07-15 13:23:40.662305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:24344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.784 [2024-07-15 13:23:40.662319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.784 [2024-07-15 13:23:40.662334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:24352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.784 [2024-07-15 13:23:40.662348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.784 [2024-07-15 13:23:40.662363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:24360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.784 [2024-07-15 13:23:40.662377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.784 [2024-07-15 13:23:40.662395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:24368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.784 [2024-07-15 13:23:40.662409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.784 [2024-07-15 13:23:40.662423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:24376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.784 [2024-07-15 13:23:40.662437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.784 [2024-07-15 13:23:40.662452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:24384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.784 [2024-07-15 13:23:40.662474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.784 [2024-07-15 13:23:40.662490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:24392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.784 [2024-07-15 13:23:40.662504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.784 [2024-07-15 13:23:40.662519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:24400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.784 [2024-07-15 13:23:40.662533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.784 [2024-07-15 13:23:40.662548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:24408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.784 [2024-07-15 13:23:40.662562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.784 [2024-07-15 13:23:40.662577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:23752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.784 [2024-07-15 13:23:40.662591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.784 [2024-07-15 13:23:40.662606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.784 [2024-07-15 13:23:40.662619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.785 [2024-07-15 13:23:40.662635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:23768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.785 [2024-07-15 13:23:40.662648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.785 [2024-07-15 13:23:40.662663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:23776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.785 [2024-07-15 13:23:40.662676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.785 [2024-07-15 13:23:40.662691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:23784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.785 [2024-07-15 13:23:40.662726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.785 [2024-07-15 13:23:40.662744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:23792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.785 [2024-07-15 13:23:40.662757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:49.785 [2024-07-15 13:23:40.662772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.785 [2024-07-15 13:23:40.662785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.785 [2024-07-15 13:23:40.662801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:23808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.785 [2024-07-15 13:23:40.662814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.785 [2024-07-15 13:23:40.662829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:23816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.785 [2024-07-15 13:23:40.662842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.785 [2024-07-15 13:23:40.662866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:23824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.785 [2024-07-15 13:23:40.662882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.785 [2024-07-15 13:23:40.662897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:23832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.785 [2024-07-15 13:23:40.662910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.785 [2024-07-15 13:23:40.662925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.785 [2024-07-15 13:23:40.662939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.785 [2024-07-15 13:23:40.662964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.785 [2024-07-15 13:23:40.662977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.785 [2024-07-15 13:23:40.662992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:23856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.785 [2024-07-15 13:23:40.663005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.785 [2024-07-15 13:23:40.663020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.785 [2024-07-15 13:23:40.663033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.785 [2024-07-15 13:23:40.663049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:24416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.785 [2024-07-15 13:23:40.663063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.785 [2024-07-15 
13:23:40.663078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:24424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.785 [2024-07-15 13:23:40.663091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.785 [2024-07-15 13:23:40.663106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.785 [2024-07-15 13:23:40.663120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.785 [2024-07-15 13:23:40.663135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:24440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.785 [2024-07-15 13:23:40.663149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.785 [2024-07-15 13:23:40.663163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:24448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.785 [2024-07-15 13:23:40.663177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.785 [2024-07-15 13:23:40.663192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:24456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.785 [2024-07-15 13:23:40.663214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.785 [2024-07-15 13:23:40.663231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:24464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.785 [2024-07-15 13:23:40.663252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.785 [2024-07-15 13:23:40.663267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.785 [2024-07-15 13:23:40.663282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.785 [2024-07-15 13:23:40.663297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:24480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.785 [2024-07-15 13:23:40.663311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.785 [2024-07-15 13:23:40.663326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:24488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.785 [2024-07-15 13:23:40.663339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.785 [2024-07-15 13:23:40.663355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:24496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.785 [2024-07-15 13:23:40.663368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.785 [2024-07-15 13:23:40.663383] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:24504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.785 [2024-07-15 13:23:40.663396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.785 [2024-07-15 13:23:40.663411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:24512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.785 [2024-07-15 13:23:40.663424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.785 [2024-07-15 13:23:40.663439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:24520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.785 [2024-07-15 13:23:40.663453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.785 [2024-07-15 13:23:40.663468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:24528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.785 [2024-07-15 13:23:40.663482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.785 [2024-07-15 13:23:40.663496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:24536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.786 [2024-07-15 13:23:40.663520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.786 [2024-07-15 13:23:40.663537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:24544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.786 [2024-07-15 13:23:40.663551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.786 [2024-07-15 13:23:40.663566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:24552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.786 [2024-07-15 13:23:40.663580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.786 [2024-07-15 13:23:40.663595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:24560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.786 [2024-07-15 13:23:40.663609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.786 [2024-07-15 13:23:40.663630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:24568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.786 [2024-07-15 13:23:40.663644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.786 [2024-07-15 13:23:40.663659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:24576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.786 [2024-07-15 13:23:40.663673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.786 [2024-07-15 13:23:40.663688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:124 nsid:1 lba:24584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.786 [2024-07-15 13:23:40.663702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.786 [2024-07-15 13:23:40.663717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:24592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.786 [2024-07-15 13:23:40.663731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.786 [2024-07-15 13:23:40.663745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:24600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.786 [2024-07-15 13:23:40.663759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.786 [2024-07-15 13:23:40.663774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:24608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.786 [2024-07-15 13:23:40.663788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.786 [2024-07-15 13:23:40.663803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:24616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.786 [2024-07-15 13:23:40.663816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.786 [2024-07-15 13:23:40.663831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:24624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.786 [2024-07-15 13:23:40.663845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.786 [2024-07-15 13:23:40.663859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:24632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.786 [2024-07-15 13:23:40.663873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.786 [2024-07-15 13:23:40.663888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:24640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.786 [2024-07-15 13:23:40.663901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.786 [2024-07-15 13:23:40.663916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:24648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.786 [2024-07-15 13:23:40.663930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.786 [2024-07-15 13:23:40.663945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:24656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.786 [2024-07-15 13:23:40.663958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.786 [2024-07-15 13:23:40.663973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:24664 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:24:49.786 [2024-07-15 13:23:40.663997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.786 [2024-07-15 13:23:40.664033] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:49.786 [2024-07-15 13:23:40.664049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24672 len:8 PRP1 0x0 PRP2 0x0 00:24:49.786 [2024-07-15 13:23:40.664063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.786 [2024-07-15 13:23:40.664081] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:49.786 [2024-07-15 13:23:40.664092] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:49.786 [2024-07-15 13:23:40.664102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24680 len:8 PRP1 0x0 PRP2 0x0 00:24:49.786 [2024-07-15 13:23:40.664116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.786 [2024-07-15 13:23:40.664130] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:49.786 [2024-07-15 13:23:40.664140] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:49.786 [2024-07-15 13:23:40.664150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24688 len:8 PRP1 0x0 PRP2 0x0 00:24:49.786 [2024-07-15 13:23:40.664163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.786 [2024-07-15 13:23:40.664176] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:49.786 [2024-07-15 13:23:40.664186] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:49.786 [2024-07-15 13:23:40.664196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24696 len:8 PRP1 0x0 PRP2 0x0 00:24:49.786 [2024-07-15 13:23:40.664221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.786 [2024-07-15 13:23:40.664236] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:49.786 [2024-07-15 13:23:40.664246] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:49.786 [2024-07-15 13:23:40.664256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24704 len:8 PRP1 0x0 PRP2 0x0 00:24:49.786 [2024-07-15 13:23:40.664269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.786 [2024-07-15 13:23:40.664283] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:49.786 [2024-07-15 13:23:40.664292] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:49.786 [2024-07-15 13:23:40.664302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24712 len:8 PRP1 0x0 PRP2 0x0 00:24:49.786 [2024-07-15 13:23:40.664315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:49.786 [2024-07-15 13:23:40.664329] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:49.786 [2024-07-15 13:23:40.664338] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:49.786 [2024-07-15 13:23:40.664348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24720 len:8 PRP1 0x0 PRP2 0x0 00:24:49.786 [2024-07-15 13:23:40.664361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.787 [2024-07-15 13:23:40.664374] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:49.787 [2024-07-15 13:23:40.664384] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:49.787 [2024-07-15 13:23:40.664395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24728 len:8 PRP1 0x0 PRP2 0x0 00:24:49.787 [2024-07-15 13:23:40.664416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.787 [2024-07-15 13:23:40.664435] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:49.787 [2024-07-15 13:23:40.664445] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:49.787 [2024-07-15 13:23:40.664456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23872 len:8 PRP1 0x0 PRP2 0x0 00:24:49.787 [2024-07-15 13:23:40.664469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.787 [2024-07-15 13:23:40.664482] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:49.787 [2024-07-15 13:23:40.664492] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:49.787 [2024-07-15 13:23:40.664501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23880 len:8 PRP1 0x0 PRP2 0x0 00:24:49.787 [2024-07-15 13:23:40.664514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.787 [2024-07-15 13:23:40.664527] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:49.787 [2024-07-15 13:23:40.664537] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:49.787 [2024-07-15 13:23:40.664546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23888 len:8 PRP1 0x0 PRP2 0x0 00:24:49.787 [2024-07-15 13:23:40.664559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.787 [2024-07-15 13:23:40.664572] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:49.787 [2024-07-15 13:23:40.664582] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:49.787 [2024-07-15 13:23:40.664592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23896 len:8 PRP1 0x0 PRP2 0x0 00:24:49.787 [2024-07-15 13:23:40.664611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.787 [2024-07-15 
13:23:40.664624] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:49.787 [2024-07-15 13:23:40.664633] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:49.787 [2024-07-15 13:23:40.664643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23904 len:8 PRP1 0x0 PRP2 0x0 00:24:49.787 [2024-07-15 13:23:40.664656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.787 [2024-07-15 13:23:40.664670] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:49.787 [2024-07-15 13:23:40.664679] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:49.787 [2024-07-15 13:23:40.664689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23912 len:8 PRP1 0x0 PRP2 0x0 00:24:49.787 [2024-07-15 13:23:40.664702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.787 [2024-07-15 13:23:40.664715] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:49.787 [2024-07-15 13:23:40.664724] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:49.787 [2024-07-15 13:23:40.664734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23920 len:8 PRP1 0x0 PRP2 0x0 00:24:49.787 [2024-07-15 13:23:40.664746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.787 [2024-07-15 13:23:40.664760] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:49.787 [2024-07-15 13:23:40.664769] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:49.787 [2024-07-15 13:23:40.664785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23928 len:8 PRP1 0x0 PRP2 0x0 00:24:49.787 [2024-07-15 13:23:40.664799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.787 [2024-07-15 13:23:40.664868] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xd512b0 was disconnected and freed. reset controller. 
00:24:49.787 [2024-07-15 13:23:40.664887] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:24:49.787 [2024-07-15 13:23:40.664949] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:49.787 [2024-07-15 13:23:40.664969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.787 [2024-07-15 13:23:40.664984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:49.787 [2024-07-15 13:23:40.664998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.787 [2024-07-15 13:23:40.665012] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:49.787 [2024-07-15 13:23:40.665025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.787 [2024-07-15 13:23:40.665038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:49.787 [2024-07-15 13:23:40.665051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.787 [2024-07-15 13:23:40.665065] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:49.787 [2024-07-15 13:23:40.665119] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb850c0 (9): Bad file descriptor 00:24:49.787 [2024-07-15 13:23:40.668975] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:49.787 [2024-07-15 13:23:40.709591] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
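The burst of "ABORTED - SQ DELETION" notices above is the expected signature of a path disappearing mid-I/O: the queued READ/WRITE commands on the old submission queue are completed manually, the qpair is freed, and bdev_nvme fails over from 10.0.0.2:4422 back to 10.0.0.2:4420 before resetting the controller. The trace that follows counts these events; a minimal sketch of that check, assuming the reset notices land in the try.txt log this run cats and removes later (the exact file failover.sh greps is not shown in this excerpt):

    # Count completed controller resets in the captured bdevperf log;
    # the trace below shows the script requiring exactly 3 (failover.sh@67).
    log=/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
    count=$(grep -c 'Resetting controller successful' "$log")
    if (( count != 3 )); then
        echo "expected 3 successful controller resets, saw $count" >&2
        exit 1
    fi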
00:24:49.787 00:24:49.787 Latency(us) 00:24:49.787 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:49.787 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:49.787 Verification LBA range: start 0x0 length 0x4000 00:24:49.787 NVMe0n1 : 15.00 8502.12 33.21 245.06 0.00 14601.58 614.40 20733.21 00:24:49.787 =================================================================================================================== 00:24:49.787 Total : 8502.12 33.21 245.06 0.00 14601.58 614.40 20733.21 00:24:49.787 Received shutdown signal, test time was about 15.000000 seconds 00:24:49.787 00:24:49.787 Latency(us) 00:24:49.787 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:49.787 =================================================================================================================== 00:24:49.787 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:49.787 13:23:46 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:24:49.787 13:23:46 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:24:49.787 13:23:46 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:24:49.788 13:23:46 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:24:49.788 13:23:46 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=106658 00:24:49.788 13:23:46 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 106658 /var/tmp/bdevperf.sock 00:24:49.788 13:23:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 106658 ']' 00:24:49.788 13:23:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:49.788 13:23:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:49.788 13:23:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:49.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
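The second bdevperf instance launched above is started with -z, so it idles until it is driven over its RPC socket. A minimal sketch of launching it in the background and waiting for that socket, reusing only the flags shown in the trace; the polling loop is a simplified stand-in for the suite's waitforlisten helper, and rpc_get_methods is used here just as a cheap liveness probe:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
    bdevperf_pid=$!
    # Poll until the UNIX-domain RPC socket answers before sending any commands.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
            rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done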
00:24:49.788 13:23:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:49.788 13:23:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:50.353 13:23:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:50.353 13:23:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:24:50.353 13:23:46 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:50.353 [2024-07-15 13:23:47.045945] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:50.353 13:23:47 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:50.611 [2024-07-15 13:23:47.298148] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:50.611 13:23:47 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:51.175 NVMe0n1 00:24:51.175 13:23:47 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:51.434 00:24:51.434 13:23:48 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:51.692 00:24:51.692 13:23:48 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:51.692 13:23:48 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:24:52.258 13:23:48 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:52.258 13:23:48 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:24:55.559 13:23:51 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:55.559 13:23:51 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:24:55.816 13:23:52 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=106787 00:24:55.816 13:23:52 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:55.816 13:23:52 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 106787 00:24:56.809 0 00:24:57.068 13:23:53 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:24:57.068 [2024-07-15 13:23:46.468894] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:24:57.068 [2024-07-15 13:23:46.469040] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106658 ] 00:24:57.068 [2024-07-15 13:23:46.603786] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:57.068 [2024-07-15 13:23:46.702542] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:57.068 [2024-07-15 13:23:48.932323] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:24:57.068 [2024-07-15 13:23:48.932456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:57.068 [2024-07-15 13:23:48.932480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.068 [2024-07-15 13:23:48.932499] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:57.068 [2024-07-15 13:23:48.932513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.068 [2024-07-15 13:23:48.932527] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:57.068 [2024-07-15 13:23:48.932541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.068 [2024-07-15 13:23:48.932555] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:57.068 [2024-07-15 13:23:48.932568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.068 [2024-07-15 13:23:48.932582] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.068 [2024-07-15 13:23:48.932634] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.068 [2024-07-15 13:23:48.932666] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e90c0 (9): Bad file descriptor 00:24:57.068 [2024-07-15 13:23:48.943924] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:57.068 Running I/O for 1 seconds... 
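The try.txt excerpt above shows the relaunched bdevperf reconnecting, failing over from 10.0.0.2:4420 to 10.0.0.2:4421 after the 4420 path was detached, and then running the 1-second verify workload whose results follow. That workload is kicked off over the RPC socket with perform_tests, as traced at failover.sh@89-92; a minimal sketch of the driving pattern, an illustration rather than the script's exact code:

    # Drive the verify run asynchronously and block until it finishes.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests &
    run_test_pid=$!
    wait "$run_test_pid"   # returns when the bdevperf run completes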
00:24:57.068 00:24:57.068 Latency(us) 00:24:57.068 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:57.068 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:57.068 Verification LBA range: start 0x0 length 0x4000 00:24:57.068 NVMe0n1 : 1.01 8398.40 32.81 0.00 0.00 15165.60 2338.44 15609.48 00:24:57.068 =================================================================================================================== 00:24:57.068 Total : 8398.40 32.81 0.00 0.00 15165.60 2338.44 15609.48 00:24:57.068 13:23:53 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:57.068 13:23:53 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:24:57.325 13:23:53 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:57.583 13:23:54 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:57.583 13:23:54 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:24:57.840 13:23:54 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:58.404 13:23:54 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:25:01.680 13:23:57 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:01.680 13:23:57 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:25:01.680 13:23:58 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 106658 00:25:01.680 13:23:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 106658 ']' 00:25:01.680 13:23:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 106658 00:25:01.680 13:23:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:25:01.680 13:23:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:01.680 13:23:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 106658 00:25:01.680 killing process with pid 106658 00:25:01.680 13:23:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:01.680 13:23:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:01.680 13:23:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 106658' 00:25:01.680 13:23:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 106658 00:25:01.680 13:23:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 106658 00:25:01.937 13:23:58 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:25:01.937 13:23:58 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:02.194 13:23:58 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:25:02.194 13:23:58 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:25:02.194 
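With the verify run done, the trace above walks the teardown: detach the 4422 and 4421 paths (failover.sh@98, @100), inspect the controller list (@103), kill the bdevperf process (@108), and delete the subsystem on the target (@111). A condensed sketch of that sequence using the same rpc.py calls, with paths and names copied from the trace and error handling omitted; bdevperf_pid is the pid recorded when bdevperf was launched:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Drop the remaining target paths from the initiator-side NVMe0 bdev.
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
        -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
        -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    sleep 3
    # The script checks the controller list at each step (grep -q NVMe0).
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0
    # Stop bdevperf and remove the subsystem on the target side.
    kill "$bdevperf_pid" && wait "$bdevperf_pid"
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1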
13:23:58 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:25:02.194 13:23:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:02.194 13:23:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:25:02.194 13:23:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:02.194 13:23:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:25:02.194 13:23:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:02.194 13:23:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:02.194 rmmod nvme_tcp 00:25:02.194 rmmod nvme_fabrics 00:25:02.194 rmmod nvme_keyring 00:25:02.194 13:23:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:02.194 13:23:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:25:02.194 13:23:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:25:02.194 13:23:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 106315 ']' 00:25:02.194 13:23:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 106315 00:25:02.194 13:23:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 106315 ']' 00:25:02.194 13:23:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 106315 00:25:02.194 13:23:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:25:02.194 13:23:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:02.194 13:23:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 106315 00:25:02.194 killing process with pid 106315 00:25:02.194 13:23:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:25:02.194 13:23:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:25:02.194 13:23:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 106315' 00:25:02.194 13:23:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 106315 00:25:02.194 13:23:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 106315 00:25:02.451 13:23:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:02.451 13:23:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:02.451 13:23:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:02.451 13:23:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:02.451 13:23:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:02.451 13:23:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:02.451 13:23:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:02.452 13:23:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:02.452 13:23:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:25:02.452 ************************************ 00:25:02.452 END TEST nvmf_failover 00:25:02.452 ************************************ 00:25:02.452 00:25:02.452 real 0m32.641s 00:25:02.452 user 2m7.545s 00:25:02.452 sys 0m4.792s 00:25:02.452 13:23:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:02.452 13:23:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:02.452 13:23:59 
nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:02.452 13:23:59 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:25:02.452 13:23:59 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:02.452 13:23:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:02.452 ************************************ 00:25:02.452 START TEST nvmf_host_discovery 00:25:02.452 ************************************ 00:25:02.452 13:23:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:02.710 * Looking for test storage... 00:25:02.710 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:25:02.710 13:23:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:02.710 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:25:02.710 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:02.710 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:02.710 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:02.710 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:02.710 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:02.710 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:02.710 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:02.710 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:02.710 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:02.710 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:02.710 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:25:02.710 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=c8b8b44b-387e-43b9-a950-dc0d98528a02 00:25:02.710 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:02.710 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:02.710 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:02.710 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:02.710 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:02.710 13:23:59 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:02.710 13:23:59 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:02.710 13:23:59 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:02.710 13:23:59 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.710 13:23:59 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.710 13:23:59 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.710 13:23:59 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:25:02.710 13:23:59 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.710 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:25:02.710 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:02.710 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:02.710 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:02.710 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:02.710 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:02.710 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:02.710 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:02.710 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:02.710 13:23:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:25:02.710 13:23:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:25:02.710 13:23:59 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:25:02.710 13:23:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:25:02.710 13:23:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:25:02.710 13:23:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:25:02.710 13:23:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:25:02.710 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:02.710 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:02.710 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:02.710 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:02.710 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:02.710 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:02.710 13:23:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:02.710 13:23:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:02.710 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:25:02.710 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:25:02.710 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:25:02.710 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:25:02.710 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:25:02.710 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:25:02.710 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:02.710 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:02.710 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:02.710 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:25:02.710 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:02.710 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:02.710 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:02.710 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:02.710 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:02.710 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:02.710 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:02.710 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:02.710 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:25:02.710 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:25:02.710 Cannot find device "nvmf_tgt_br" 00:25:02.711 
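nvmftestinit for the discovery test builds a veth/namespace topology instead of using physical NICs; the "Cannot find device" and "Cannot open network namespace" messages around this point are the tolerated cleanup of links left by the previous test (each failing command is followed by "# true" in the trace). The sketch below condenses the setup commands that follow: a namespace for the target, veth pairs for initiator and target, a bridge tying them together, an iptables rule for port 4420, and a reachability check. Commands are copied from nvmf_veth_init as traced; the second target interface (nvmf_tgt_if2/10.0.0.3) is omitted for brevity:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2   # initiator-side reachability check of the target address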
13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # true 00:25:02.711 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:25:02.711 Cannot find device "nvmf_tgt_br2" 00:25:02.711 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # true 00:25:02.711 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:25:02.711 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:25:02.711 Cannot find device "nvmf_tgt_br" 00:25:02.711 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # true 00:25:02.711 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:25:02.711 Cannot find device "nvmf_tgt_br2" 00:25:02.711 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # true 00:25:02.711 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:25:02.711 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:25:02.711 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:02.711 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:02.711 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:25:02.711 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:02.711 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:02.711 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:25:02.711 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:25:02.711 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:02.711 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:02.711 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:02.711 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:02.711 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:02.711 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:02.711 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:02.711 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:02.711 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:25:02.711 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:25:02.711 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:25:02.711 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:25:02.711 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:02.969 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@188 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:02.969 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:02.969 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:25:02.969 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:25:02.969 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:25:02.969 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:02.969 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:02.969 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:02.969 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:02.969 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:25:02.969 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:02.969 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:25:02.969 00:25:02.969 --- 10.0.0.2 ping statistics --- 00:25:02.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:02.969 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:25:02.969 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:25:02.969 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:02.969 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:25:02.969 00:25:02.969 --- 10.0.0.3 ping statistics --- 00:25:02.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:02.969 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:25:02.969 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:02.969 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:02.969 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:25:02.969 00:25:02.969 --- 10.0.0.1 ping statistics --- 00:25:02.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:02.969 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:25:02.969 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:02.969 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@433 -- # return 0 00:25:02.969 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:02.969 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:02.969 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:02.969 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:02.969 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:02.969 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:02.969 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:02.969 13:23:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:25:02.969 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:02.969 13:23:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:02.969 13:23:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:02.969 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=107092 00:25:02.969 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 107092 00:25:02.969 13:23:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:02.969 13:23:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 107092 ']' 00:25:02.969 13:23:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:02.969 13:23:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:02.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:02.969 13:23:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:02.969 13:23:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:02.969 13:23:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:02.969 [2024-07-15 13:23:59.604628] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:25:02.969 [2024-07-15 13:23:59.604737] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:03.227 [2024-07-15 13:23:59.742797] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:03.227 [2024-07-15 13:23:59.847648] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:03.227 [2024-07-15 13:23:59.847721] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:03.227 [2024-07-15 13:23:59.847736] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:03.227 [2024-07-15 13:23:59.847747] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:03.227 [2024-07-15 13:23:59.847756] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:03.227 [2024-07-15 13:23:59.847800] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:04.162 13:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:04.162 13:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:25:04.162 13:24:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:04.162 13:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:04.162 13:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:04.162 13:24:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:04.162 13:24:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:04.162 13:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.162 13:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:04.162 [2024-07-15 13:24:00.671023] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:04.162 13:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.162 13:24:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:25:04.162 13:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.162 13:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:04.162 [2024-07-15 13:24:00.679150] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:04.162 13:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.162 13:24:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:25:04.162 13:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.162 13:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:04.162 null0 00:25:04.162 13:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.162 13:24:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:25:04.162 13:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.162 13:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:04.162 null1 00:25:04.162 13:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.162 13:24:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:25:04.162 13:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.162 13:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:04.162 13:24:00 
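On the target side, the discovery test then creates the TCP transport, exposes the well-known discovery subsystem on port 8009, and creates two null bdevs, as traced at discovery.sh@32-37. A minimal sketch of those steps; it assumes rpc_cmd resolves to plain rpc.py against the target's default /var/tmp/spdk.sock socket, and the transport options are copied verbatim from the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # TCP transport; -o and -u 8192 (in-capsule data size) as in discovery.sh@32.
    $rpc nvmf_create_transport -t tcp -o -u 8192
    # Expose the discovery subsystem on 10.0.0.2:8009.
    $rpc nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
        -t tcp -a 10.0.0.2 -s 8009
    # Two 1000 MiB null bdevs with 512-byte blocks, for later subsystems.
    $rpc bdev_null_create null0 1000 512
    $rpc bdev_null_create null1 1000 512
    $rpc bdev_wait_for_examine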
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.162 13:24:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=107142 00:25:04.162 13:24:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:25:04.162 13:24:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 107142 /tmp/host.sock 00:25:04.162 13:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 107142 ']' 00:25:04.162 13:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:25:04.162 13:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:04.162 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:04.162 13:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:04.162 13:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:04.162 13:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:04.162 [2024-07-15 13:24:00.780561] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:25:04.162 [2024-07-15 13:24:00.780714] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107142 ] 00:25:04.419 [2024-07-15 13:24:00.925801] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:04.419 [2024-07-15 13:24:01.026425] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:05.352 13:24:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:05.352 13:24:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:25:05.352 13:24:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:05.352 13:24:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:25:05.352 13:24:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.352 13:24:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:05.352 13:24:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.352 13:24:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:25:05.352 13:24:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.352 13:24:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:05.352 13:24:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.352 13:24:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:25:05.352 13:24:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:25:05.352 13:24:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:05.352 13:24:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # 
sort 00:25:05.352 13:24:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:05.352 13:24:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:05.352 13:24:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.352 13:24:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:05.352 13:24:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.352 13:24:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:25:05.352 13:24:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:25:05.352 13:24:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:05.352 13:24:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.352 13:24:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:05.353 13:24:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:05.353 13:24:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:05.353 13:24:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:05.353 13:24:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.353 13:24:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:25:05.353 13:24:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:25:05.353 13:24:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.353 13:24:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:05.353 13:24:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.353 13:24:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:25:05.353 13:24:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:05.353 13:24:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.353 13:24:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:05.353 13:24:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:05.353 13:24:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:05.353 13:24:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:05.353 13:24:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.353 13:24:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:25:05.353 13:24:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:25:05.353 13:24:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:05.353 13:24:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:05.353 13:24:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:05.353 13:24:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:05.353 13:24:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.353 13:24:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:05.353 13:24:01 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.353 13:24:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:25:05.353 13:24:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:25:05.353 13:24:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.353 13:24:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:05.353 13:24:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.353 13:24:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:25:05.353 13:24:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:05.353 13:24:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:05.353 13:24:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.353 13:24:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:05.353 13:24:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:05.353 13:24:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:05.353 13:24:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.353 13:24:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:25:05.353 13:24:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:25:05.353 13:24:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:05.353 13:24:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:05.353 13:24:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:05.353 13:24:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.353 13:24:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:05.353 13:24:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:05.353 13:24:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.611 13:24:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:25:05.611 13:24:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:05.611 13:24:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.611 13:24:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:05.611 [2024-07-15 13:24:02.123532] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:05.611 13:24:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.611 13:24:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:25:05.611 13:24:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:05.611 13:24:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.611 13:24:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:05.611 13:24:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:05.611 13:24:02 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:05.611 13:24:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:05.611 13:24:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.611 13:24:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:25:05.611 13:24:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:25:05.611 13:24:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:05.611 13:24:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.611 13:24:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:05.611 13:24:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:05.611 13:24:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:05.611 13:24:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:05.611 13:24:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.611 13:24:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:25:05.611 13:24:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:25:05.611 13:24:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:05.611 13:24:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:05.611 13:24:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:05.611 13:24:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:05.611 13:24:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:05.611 13:24:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:05.611 13:24:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:25:05.611 13:24:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:05.611 13:24:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:05.611 13:24:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.611 13:24:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:05.611 13:24:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.611 13:24:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:05.611 13:24:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:25:05.611 13:24:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:25:05.611 13:24:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:05.611 13:24:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:25:05.611 13:24:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.611 13:24:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:05.611 13:24:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.611 13:24:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:05.612 13:24:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:05.612 13:24:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:05.612 13:24:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:05.612 13:24:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:05.612 13:24:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:25:05.612 13:24:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:05.612 13:24:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.612 13:24:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:05.612 13:24:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:05.612 13:24:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:05.612 13:24:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:05.612 13:24:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.870 13:24:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == \n\v\m\e\0 ]] 00:25:05.870 13:24:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:25:06.128 [2024-07-15 13:24:02.774467] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:06.128 [2024-07-15 13:24:02.774514] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:06.128 [2024-07-15 13:24:02.774534] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:06.128 [2024-07-15 13:24:02.860639] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:06.386 [2024-07-15 13:24:02.916863] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:06.386 [2024-07-15 13:24:02.916931] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:06.646 13:24:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:06.646 13:24:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:06.646 13:24:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:25:06.646 13:24:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:06.646 13:24:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:06.646 13:24:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:06.646 13:24:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.646 13:24:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:06.646 13:24:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:06.906 13:24:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.906 13:24:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:06.906 13:24:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:06.906 13:24:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:06.906 13:24:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:06.906 13:24:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:06.906 13:24:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:06.906 13:24:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:25:06.906 13:24:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:25:06.906 13:24:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:06.906 13:24:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.906 13:24:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:06.906 13:24:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:06.906 13:24:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:06.906 13:24:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:06.906 13:24:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.906 13:24:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:25:06.906 13:24:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:06.906 13:24:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:06.906 13:24:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ 
"$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:06.906 13:24:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:06.906 13:24:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:06.906 13:24:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:25:06.907 13:24:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:25:06.907 13:24:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:06.907 13:24:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:06.907 13:24:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.907 13:24:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:06.907 13:24:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:06.907 13:24:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:06.907 13:24:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.907 13:24:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0 ]] 00:25:06.907 13:24:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:06.907 13:24:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:25:06.907 13:24:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:06.907 13:24:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:06.907 13:24:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:06.907 13:24:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:06.907 13:24:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:06.907 13:24:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:06.907 13:24:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:25:06.907 13:24:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:06.907 13:24:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:06.907 13:24:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.907 13:24:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:06.907 13:24:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.907 13:24:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:06.907 13:24:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:25:06.907 13:24:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:25:06.907 13:24:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:06.907 13:24:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:25:06.907 13:24:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.907 13:24:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:06.907 13:24:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.907 13:24:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:06.907 13:24:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:06.907 13:24:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:06.907 13:24:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:06.907 13:24:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:06.907 13:24:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:25:06.907 13:24:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:06.907 13:24:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:06.907 13:24:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:06.907 13:24:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:06.907 13:24:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.907 13:24:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:07.165 13:24:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.165 13:24:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:07.165 13:24:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:07.165 13:24:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:25:07.165 13:24:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:07.165 13:24:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:07.165 13:24:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:07.165 
13:24:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:07.165 13:24:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:07.165 13:24:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:07.165 13:24:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:25:07.165 13:24:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:25:07.165 13:24:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.165 13:24:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:07.166 13:24:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:07.166 13:24:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.166 13:24:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:07.166 13:24:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:07.166 13:24:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:25:07.166 13:24:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:07.166 13:24:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:25:07.166 13:24:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.166 13:24:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:07.166 [2024-07-15 13:24:03.720853] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:07.166 [2024-07-15 13:24:03.721428] bdev_nvme.c:6966:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:07.166 [2024-07-15 13:24:03.721461] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:07.166 13:24:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.166 13:24:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:07.166 13:24:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:07.166 13:24:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:07.166 13:24:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:07.166 13:24:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:07.166 13:24:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:25:07.166 13:24:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:07.166 13:24:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:07.166 13:24:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.166 13:24:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:07.166 13:24:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 
00:25:07.166 13:24:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:07.166 13:24:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.166 13:24:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:07.166 13:24:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:07.166 13:24:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:07.166 13:24:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:07.166 13:24:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:07.166 13:24:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:07.166 13:24:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:07.166 13:24:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:25:07.166 13:24:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:07.166 13:24:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:07.166 13:24:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:07.166 13:24:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.166 13:24:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:07.166 13:24:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:07.166 [2024-07-15 13:24:03.807487] bdev_nvme.c:6908:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:25:07.166 13:24:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.166 13:24:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:07.166 13:24:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:07.166 13:24:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:07.166 13:24:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:07.166 13:24:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:07.166 13:24:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:07.166 13:24:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:07.166 13:24:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:25:07.166 13:24:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:07.166 13:24:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:07.166 13:24:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:07.166 13:24:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:07.166 13:24:03 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.166 13:24:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:07.166 13:24:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.166 [2024-07-15 13:24:03.866833] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:07.166 [2024-07-15 13:24:03.866873] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:07.166 [2024-07-15 13:24:03.866882] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:07.166 13:24:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:25:07.166 13:24:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:25:08.541 13:24:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:08.541 13:24:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:08.541 13:24:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:25:08.541 13:24:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:08.541 13:24:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:08.541 13:24:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:08.541 13:24:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:08.541 13:24:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.541 13:24:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.541 13:24:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.541 13:24:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:25:08.541 13:24:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:08.541 13:24:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:25:08.541 13:24:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:08.541 13:24:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:08.541 13:24:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:08.541 13:24:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:08.541 13:24:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:08.541 13:24:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:08.541 13:24:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:25:08.541 13:24:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:08.541 13:24:04 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.541 13:24:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.541 13:24:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:08.541 13:24:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.541 13:24:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:08.541 13:24:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:08.541 13:24:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:25:08.541 13:24:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:08.541 13:24:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:08.541 13:24:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.541 13:24:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.541 [2024-07-15 13:24:04.989641] bdev_nvme.c:6966:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:08.541 [2024-07-15 13:24:04.989685] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:08.541 13:24:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.541 [2024-07-15 13:24:04.994190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:08.541 13:24:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:08.541 [2024-07-15 13:24:04.994237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.541 [2024-07-15 13:24:04.994252] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:08.541 [2024-07-15 13:24:04.994262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.541 [2024-07-15 13:24:04.994273] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:08.541 [2024-07-15 13:24:04.994283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.541 [2024-07-15 13:24:04.994293] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:08.541 [2024-07-15 13:24:04.994303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.541 [2024-07-15 13:24:04.994312] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xff29c0 is same with the state(5) to be set 00:25:08.541 13:24:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:08.541 13:24:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:08.541 13:24:04 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@912 -- # (( max-- )) 00:25:08.541 13:24:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:08.541 13:24:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:25:08.541 13:24:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:08.541 13:24:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:08.541 13:24:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.541 13:24:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:08.541 13:24:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:08.541 13:24:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.541 [2024-07-15 13:24:05.004147] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xff29c0 (9): Bad file descriptor 00:25:08.541 13:24:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.541 [2024-07-15 13:24:05.014171] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:08.541 [2024-07-15 13:24:05.014328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.541 [2024-07-15 13:24:05.014354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff29c0 with addr=10.0.0.2, port=4420 00:25:08.541 [2024-07-15 13:24:05.014368] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xff29c0 is same with the state(5) to be set 00:25:08.541 [2024-07-15 13:24:05.014387] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xff29c0 (9): Bad file descriptor 00:25:08.541 [2024-07-15 13:24:05.014403] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:08.541 [2024-07-15 13:24:05.014424] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:08.541 [2024-07-15 13:24:05.014436] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:08.541 [2024-07-15 13:24:05.014454] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
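The burst of "connect() failed, errno = 111" / "Resetting controller failed." entries around here is expected at this point in the test rather than a fault: the 4420 listener has just been removed from nqn.2016-06.io.spdk:cnode0, so the host's existing path gets ECONNREFUSED on every reconnect attempt until the refreshed discovery log page drops that path and only 4421 remains. A hedged way to watch the surviving path from the host-side RPC socket, mirroring the script's get_subsystem_paths helper as seen in the trace:

    # Expect a single remaining trsvcid of 4421 once the discovery poller has
    # processed the updated log page (assumes the host app still owns /tmp/host.sock).
    ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs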
00:25:08.541 [2024-07-15 13:24:05.024247] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:08.542 [2024-07-15 13:24:05.024356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.542 [2024-07-15 13:24:05.024378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff29c0 with addr=10.0.0.2, port=4420 00:25:08.542 [2024-07-15 13:24:05.024390] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xff29c0 is same with the state(5) to be set 00:25:08.542 [2024-07-15 13:24:05.024408] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xff29c0 (9): Bad file descriptor 00:25:08.542 [2024-07-15 13:24:05.024423] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:08.542 [2024-07-15 13:24:05.024433] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:08.542 [2024-07-15 13:24:05.024443] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:08.542 [2024-07-15 13:24:05.024459] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:08.542 [2024-07-15 13:24:05.034319] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:08.542 [2024-07-15 13:24:05.034447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.542 [2024-07-15 13:24:05.034471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff29c0 with addr=10.0.0.2, port=4420 00:25:08.542 [2024-07-15 13:24:05.034484] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xff29c0 is same with the state(5) to be set 00:25:08.542 [2024-07-15 13:24:05.034502] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xff29c0 (9): Bad file descriptor 00:25:08.542 [2024-07-15 13:24:05.034518] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:08.542 [2024-07-15 13:24:05.034529] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:08.542 [2024-07-15 13:24:05.034540] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:08.542 [2024-07-15 13:24:05.034556] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:08.542 [2024-07-15 13:24:05.044400] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:08.542 [2024-07-15 13:24:05.044527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.542 [2024-07-15 13:24:05.044549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff29c0 with addr=10.0.0.2, port=4420 00:25:08.542 [2024-07-15 13:24:05.044561] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xff29c0 is same with the state(5) to be set 00:25:08.542 [2024-07-15 13:24:05.044579] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xff29c0 (9): Bad file descriptor 00:25:08.542 [2024-07-15 13:24:05.044595] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:08.542 [2024-07-15 13:24:05.044605] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:08.542 [2024-07-15 13:24:05.044615] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:08.542 [2024-07-15 13:24:05.044631] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:08.542 [2024-07-15 13:24:05.054476] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:08.542 [2024-07-15 13:24:05.054592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.542 [2024-07-15 13:24:05.054615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff29c0 with addr=10.0.0.2, port=4420 00:25:08.542 [2024-07-15 13:24:05.054627] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xff29c0 is same with the state(5) to be set 00:25:08.542 [2024-07-15 13:24:05.054645] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xff29c0 (9): Bad file descriptor 00:25:08.542 [2024-07-15 13:24:05.054673] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:08.542 [2024-07-15 13:24:05.054685] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:08.542 [2024-07-15 13:24:05.054696] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:08.542 [2024-07-15 13:24:05.054732] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:08.542 13:24:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:08.542 13:24:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:08.542 13:24:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:08.542 13:24:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:08.542 13:24:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:08.542 13:24:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:08.542 13:24:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:08.542 13:24:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:25:08.542 13:24:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:08.542 13:24:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:08.542 13:24:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.542 13:24:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:08.542 13:24:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.542 13:24:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:08.542 [2024-07-15 13:24:05.064541] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:08.542 [2024-07-15 13:24:05.064649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.542 [2024-07-15 13:24:05.064672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff29c0 with addr=10.0.0.2, port=4420 00:25:08.542 [2024-07-15 13:24:05.064684] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xff29c0 is same with the state(5) to be set 00:25:08.542 [2024-07-15 13:24:05.064702] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xff29c0 (9): Bad file descriptor 00:25:08.542 [2024-07-15 13:24:05.064728] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:08.542 [2024-07-15 13:24:05.064739] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:08.542 [2024-07-15 13:24:05.064750] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:08.542 [2024-07-15 13:24:05.064766] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
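For the notification bookkeeping used in the checks that follow, the script's get_notification_count is roughly the pipeline sketched here: it fetches events newer than the last recorded notify_id (2 at this point, after the two namespace-add events), counts them, and advances notify_id by that count:

    # Count notifications newer than id 2 on the host-side RPC socket; the listener
    # switch generates no new bdev events, so 0 is expected here (assumption: the
    # harness's rpc_cmd is interchangeable with rpc.py -s /tmp/host.sock).
    ./scripts/rpc.py -s /tmp/host.sock notify_get_notifications -i 2 | jq '. | length'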
00:25:08.542 [2024-07-15 13:24:05.074606] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:08.542 [2024-07-15 13:24:05.074765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.542 [2024-07-15 13:24:05.074790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff29c0 with addr=10.0.0.2, port=4420 00:25:08.542 [2024-07-15 13:24:05.074804] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xff29c0 is same with the state(5) to be set 00:25:08.542 [2024-07-15 13:24:05.074823] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xff29c0 (9): Bad file descriptor 00:25:08.542 [2024-07-15 13:24:05.074854] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:08.542 [2024-07-15 13:24:05.074867] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:08.542 [2024-07-15 13:24:05.074878] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:08.542 [2024-07-15 13:24:05.074894] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:08.542 [2024-07-15 13:24:05.075511] bdev_nvme.c:6771:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:25:08.542 [2024-07-15 13:24:05.075544] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:08.542 13:24:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.542 13:24:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:08.542 13:24:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:08.542 13:24:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:08.542 13:24:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:08.542 13:24:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:08.542 13:24:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:08.542 13:24:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:25:08.542 13:24:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:25:08.542 13:24:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:08.542 13:24:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:08.542 13:24:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.542 13:24:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.542 13:24:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:08.542 13:24:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:08.542 13:24:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:25:08.542 13:24:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4421 == \4\4\2\1 ]] 00:25:08.542 13:24:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:08.542 13:24:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:25:08.542 13:24:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:08.542 13:24:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:08.542 13:24:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:08.542 13:24:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:08.542 13:24:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:08.542 13:24:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:08.542 13:24:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:25:08.542 13:24:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:08.542 13:24:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.542 13:24:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.542 13:24:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:08.542 13:24:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.542 13:24:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:08.542 13:24:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:08.542 13:24:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:25:08.542 13:24:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:08.543 13:24:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:25:08.543 13:24:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.543 13:24:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.543 13:24:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.543 13:24:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:25:08.543 13:24:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:25:08.543 13:24:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:08.543 13:24:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:08.543 13:24:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:25:08.543 13:24:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:25:08.543 13:24:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:08.543 13:24:05 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.543 13:24:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.543 13:24:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:08.543 13:24:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:08.543 13:24:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:08.543 13:24:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.801 13:24:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:25:08.801 13:24:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:08.801 13:24:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:25:08.801 13:24:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:25:08.801 13:24:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:08.801 13:24:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:08.801 13:24:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:25:08.801 13:24:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:25:08.801 13:24:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:08.801 13:24:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.801 13:24:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:08.801 13:24:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.801 13:24:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:08.801 13:24:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:08.801 13:24:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.801 13:24:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:25:08.801 13:24:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:08.801 13:24:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:25:08.801 13:24:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:25:08.801 13:24:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:08.801 13:24:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:08.801 13:24:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:08.801 13:24:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:08.801 13:24:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:08.801 13:24:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:25:08.801 13:24:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:08.801 13:24:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:08.801 13:24:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.801 13:24:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.801 13:24:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.801 13:24:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:25:08.801 13:24:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:25:08.801 13:24:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:25:08.801 13:24:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:08.801 13:24:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:08.801 13:24:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.801 13:24:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:09.734 [2024-07-15 13:24:06.435047] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:09.734 [2024-07-15 13:24:06.435078] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:09.734 [2024-07-15 13:24:06.435099] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:09.992 [2024-07-15 13:24:06.521187] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:25:09.992 [2024-07-15 13:24:06.580870] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:09.992 [2024-07-15 13:24:06.580946] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:09.992 13:24:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:09.992 13:24:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:09.992 13:24:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:25:09.992 13:24:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:09.992 13:24:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:25:09.992 13:24:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:09.992 13:24:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:09.992 13:24:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:09.992 13:24:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:09.992 13:24:06 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:09.992 13:24:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:09.992 2024/07/15 13:24:06 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:25:09.992 request: 00:25:09.992 { 00:25:09.992 "method": "bdev_nvme_start_discovery", 00:25:09.992 "params": { 00:25:09.992 "name": "nvme", 00:25:09.992 "trtype": "tcp", 00:25:09.992 "traddr": "10.0.0.2", 00:25:09.992 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:09.992 "adrfam": "ipv4", 00:25:09.992 "trsvcid": "8009", 00:25:09.992 "wait_for_attach": true 00:25:09.992 } 00:25:09.992 } 00:25:09.992 Got JSON-RPC error response 00:25:09.992 GoRPCClient: error on JSON-RPC call 00:25:09.992 13:24:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:09.992 13:24:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:25:09.992 13:24:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:09.992 13:24:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:09.992 13:24:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:09.992 13:24:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:25:09.992 13:24:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:09.993 13:24:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:09.993 13:24:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:09.993 13:24:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:09.993 13:24:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:09.993 13:24:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:09.993 13:24:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:09.993 13:24:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:25:09.993 13:24:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:25:09.993 13:24:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:09.993 13:24:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:09.993 13:24:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:09.993 13:24:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:09.993 13:24:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:09.993 13:24:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:09.993 13:24:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:09.993 13:24:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:09.993 13:24:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:09.993 13:24:06 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:25:09.993 13:24:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:09.993 13:24:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:25:09.993 13:24:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:09.993 13:24:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:09.993 13:24:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:09.993 13:24:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:09.993 13:24:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:09.993 13:24:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:10.250 2024/07/15 13:24:06 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:25:10.250 request: 00:25:10.250 { 00:25:10.250 "method": "bdev_nvme_start_discovery", 00:25:10.250 "params": { 00:25:10.250 "name": "nvme_second", 00:25:10.250 "trtype": "tcp", 00:25:10.250 "traddr": "10.0.0.2", 00:25:10.250 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:10.250 "adrfam": "ipv4", 00:25:10.250 "trsvcid": "8009", 00:25:10.250 "wait_for_attach": true 00:25:10.250 } 00:25:10.250 } 00:25:10.250 Got JSON-RPC error response 00:25:10.250 GoRPCClient: error on JSON-RPC call 00:25:10.250 13:24:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:10.250 13:24:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:25:10.250 13:24:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:10.250 13:24:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:10.250 13:24:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:10.250 13:24:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:25:10.250 13:24:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:10.250 13:24:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:10.250 13:24:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:10.250 13:24:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:10.250 13:24:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.250 13:24:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:10.250 13:24:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.250 13:24:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:25:10.250 13:24:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:25:10.250 13:24:06 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:10.250 13:24:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:10.250 13:24:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:10.250 13:24:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.250 13:24:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:10.250 13:24:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:10.250 13:24:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.250 13:24:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:10.250 13:24:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:10.250 13:24:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:25:10.250 13:24:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:10.250 13:24:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:25:10.250 13:24:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:10.250 13:24:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:10.250 13:24:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:10.250 13:24:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:10.250 13:24:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.250 13:24:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:11.182 [2024-07-15 13:24:07.870598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.182 [2024-07-15 13:24:07.870684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100b260 with addr=10.0.0.2, port=8010 00:25:11.182 [2024-07-15 13:24:07.870710] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:11.182 [2024-07-15 13:24:07.870731] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:11.182 [2024-07-15 13:24:07.870743] bdev_nvme.c:7046:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:12.554 [2024-07-15 13:24:08.870577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.554 [2024-07-15 13:24:08.870648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1021790 with addr=10.0.0.2, port=8010 00:25:12.554 [2024-07-15 13:24:08.870673] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:12.554 [2024-07-15 13:24:08.870684] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:12.554 [2024-07-15 13:24:08.870694] bdev_nvme.c:7046:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 
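The two connect() failures above belong to the @155 step: it points a second discovery service at port 8010, where nothing is listening, and caps the attach time with -T 3000, so the discovery poller retries until the 3000 ms budget is exhausted and the RPC fails with Code=-110 (Connection timed out) on the lines that follow. A minimal standalone sketch of the same negative check, assuming an SPDK host application is serving JSON-RPC on /tmp/host.sock outside the discovery.sh harness:

    #!/usr/bin/env bash
    # Sketch: a discovery target that never answers should make bdev_nvme_start_discovery
    # fail once the attach timeout (-T, in milliseconds) expires. Assumes a host SPDK app
    # is already listening on /tmp/host.sock and rpc.py lives at the path used in this log.
    set -u
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    if "$rpc" -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp \
            -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000; then
        echo "unexpected: discovery against a dead port succeeded" >&2
        exit 1
    fi
    echo "discovery attach timed out as expected"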
00:25:13.488 [2024-07-15 13:24:09.870431] bdev_nvme.c:7027:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:25:13.488 2024/07/15 13:24:09 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8010 trtype:tcp], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:25:13.488 request: 00:25:13.488 { 00:25:13.488 "method": "bdev_nvme_start_discovery", 00:25:13.488 "params": { 00:25:13.488 "name": "nvme_second", 00:25:13.488 "trtype": "tcp", 00:25:13.488 "traddr": "10.0.0.2", 00:25:13.488 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:13.488 "adrfam": "ipv4", 00:25:13.488 "trsvcid": "8010", 00:25:13.488 "attach_timeout_ms": 3000 00:25:13.488 } 00:25:13.488 } 00:25:13.488 Got JSON-RPC error response 00:25:13.488 GoRPCClient: error on JSON-RPC call 00:25:13.488 13:24:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:13.488 13:24:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:25:13.488 13:24:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:13.488 13:24:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:13.488 13:24:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:13.488 13:24:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:25:13.488 13:24:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:13.488 13:24:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:13.488 13:24:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:13.488 13:24:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.488 13:24:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:13.488 13:24:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:13.488 13:24:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.488 13:24:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:25:13.488 13:24:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:25:13.488 13:24:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 107142 00:25:13.488 13:24:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:25:13.488 13:24:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:13.488 13:24:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:25:13.488 13:24:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:13.488 13:24:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:25:13.488 13:24:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:13.488 13:24:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:13.488 rmmod nvme_tcp 00:25:13.488 rmmod nvme_fabrics 00:25:13.488 rmmod nvme_keyring 00:25:13.488 13:24:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:13.488 13:24:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:25:13.488 13:24:10 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:25:13.488 13:24:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 107092 ']' 00:25:13.488 13:24:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 107092 00:25:13.488 13:24:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@946 -- # '[' -z 107092 ']' 00:25:13.488 13:24:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@950 -- # kill -0 107092 00:25:13.488 13:24:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # uname 00:25:13.488 13:24:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:13.488 13:24:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 107092 00:25:13.488 13:24:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:25:13.488 13:24:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:25:13.488 killing process with pid 107092 00:25:13.488 13:24:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 107092' 00:25:13.488 13:24:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@965 -- # kill 107092 00:25:13.488 13:24:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@970 -- # wait 107092 00:25:13.747 13:24:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:13.747 13:24:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:13.747 13:24:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:13.747 13:24:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:13.747 13:24:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:13.747 13:24:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:13.747 13:24:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:13.747 13:24:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:13.747 13:24:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:25:13.747 00:25:13.747 real 0m11.182s 00:25:13.747 user 0m22.049s 00:25:13.747 sys 0m1.697s 00:25:13.747 13:24:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:13.747 13:24:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:13.747 ************************************ 00:25:13.747 END TEST nvmf_host_discovery 00:25:13.747 ************************************ 00:25:13.747 13:24:10 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:13.747 13:24:10 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:25:13.747 13:24:10 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:13.747 13:24:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:13.747 ************************************ 00:25:13.747 START TEST nvmf_host_multipath_status 00:25:13.747 ************************************ 00:25:13.747 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:13.747 * 
Looking for test storage... 00:25:13.747 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:25:13.747 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:13.747 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:25:13.747 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:13.747 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:13.747 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:13.747 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:13.747 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:13.747 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:13.747 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:13.747 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:13.747 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:13.747 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:13.747 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:25:13.747 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=c8b8b44b-387e-43b9-a950-dc0d98528a02 00:25:13.747 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:13.747 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:13.747 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:13.747 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:13.747 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:13.747 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:13.747 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:13.747 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:13.747 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.747 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.748 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.748 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:25:13.748 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.748 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:25:13.748 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:13.748 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:13.748 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:13.748 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:13.748 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:13.748 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:13.748 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:13.748 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:13.748 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:13.748 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:13.748 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:13.748 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:25:13.748 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:13.748 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 
-- # NQN=nqn.2016-06.io.spdk:cnode1 00:25:13.748 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:25:13.748 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:13.748 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:13.748 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:13.748 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:13.748 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:13.748 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:13.748 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:13.748 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:13.748 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:25:13.748 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:25:13.748 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:25:13.748 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:25:13.748 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:25:13.748 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # nvmf_veth_init 00:25:13.748 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:13.748 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:13.748 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:13.748 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:25:13.748 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:13.748 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:13.748 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:13.748 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:13.748 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:13.748 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:13.748 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:13.748 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:13.748 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:25:14.006 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:25:14.006 Cannot find device "nvmf_tgt_br" 00:25:14.006 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # true 00:25:14.006 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # ip 
link set nvmf_tgt_br2 nomaster 00:25:14.006 Cannot find device "nvmf_tgt_br2" 00:25:14.006 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # true 00:25:14.006 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:25:14.006 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:25:14.006 Cannot find device "nvmf_tgt_br" 00:25:14.006 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # true 00:25:14.006 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:25:14.006 Cannot find device "nvmf_tgt_br2" 00:25:14.006 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # true 00:25:14.006 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:25:14.006 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:25:14.006 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:14.006 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:14.006 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:25:14.006 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:14.006 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:14.006 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:25:14.006 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:25:14.006 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:14.006 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:14.006 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:14.006 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:14.006 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:14.006 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:14.006 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:14.006 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:14.006 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:25:14.006 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:25:14.006 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:25:14.006 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:25:14.006 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:14.006 13:24:10 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:14.006 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:14.006 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:25:14.006 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:25:14.006 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:25:14.265 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:14.265 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:14.265 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:14.265 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:14.265 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:25:14.265 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:14.265 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:25:14.265 00:25:14.265 --- 10.0.0.2 ping statistics --- 00:25:14.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:14.265 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:25:14.265 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:25:14.265 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:14.265 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:25:14.265 00:25:14.265 --- 10.0.0.3 ping statistics --- 00:25:14.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:14.265 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:25:14.265 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:14.265 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:14.265 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:25:14.265 00:25:14.265 --- 10.0.0.1 ping statistics --- 00:25:14.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:14.265 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:25:14.265 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:14.265 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@433 -- # return 0 00:25:14.265 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:14.265 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:14.265 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:14.265 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:14.265 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:14.265 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:14.265 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:14.265 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:25:14.265 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:14.265 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:14.265 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:14.265 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=107626 00:25:14.265 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 107626 00:25:14.265 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:25:14.265 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 107626 ']' 00:25:14.265 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:14.265 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:14.265 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:14.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:14.265 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:14.265 13:24:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:14.265 [2024-07-15 13:24:10.921400] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:25:14.265 [2024-07-15 13:24:10.921554] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:14.523 [2024-07-15 13:24:11.062696] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:14.523 [2024-07-15 13:24:11.169166] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:14.523 [2024-07-15 13:24:11.169255] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:14.524 [2024-07-15 13:24:11.169272] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:14.524 [2024-07-15 13:24:11.169289] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:14.524 [2024-07-15 13:24:11.169305] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:14.524 [2024-07-15 13:24:11.169430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:14.524 [2024-07-15 13:24:11.169941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:15.456 13:24:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:15.456 13:24:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:25:15.456 13:24:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:15.456 13:24:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:15.456 13:24:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:15.456 13:24:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:15.456 13:24:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=107626 00:25:15.456 13:24:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:15.714 [2024-07-15 13:24:12.282662] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:15.714 13:24:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:15.972 Malloc0 00:25:15.972 13:24:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:25:16.229 13:24:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:16.486 13:24:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:16.741 [2024-07-15 13:24:13.321173] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:16.741 13:24:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 
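Stripped of the xtrace noise, the target-side configuration just issued boils down to a handful of rpc.py calls: create the TCP transport, back it with the 64 MB / 512-byte-block malloc bdev declared at the top of multipath_status.sh, expose it through a subsystem with ANA reporting enabled (-r, which the later set_ana_state calls rely on), and add listeners on ports 4420 and 4421 so the initiator sees two paths. A condensed sketch, assuming nvmf_tgt is already running (the test launches it inside the nvmf_tgt_ns_spdk namespace prepared above) and answers on its default RPC socket:

    #!/usr/bin/env bash
    # Sketch of the multipath target setup driven by multipath_status.sh above.
    set -eu
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    "$rpc" nvmf_create_transport -t tcp -o -u 8192                        # TCP transport
    "$rpc" bdev_malloc_create 64 512 -b Malloc0                           # 64 MB bdev, 512 B blocks
    "$rpc" nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001 -r -m 2
    "$rpc" nvmf_subsystem_add_ns "$nqn" Malloc0
    "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
    "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4421  # listen notice follows below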
00:25:17.016 [2024-07-15 13:24:13.565314] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:17.016 13:24:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=107731 00:25:17.016 13:24:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:25:17.016 13:24:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:17.016 13:24:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 107731 /var/tmp/bdevperf.sock 00:25:17.016 13:24:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 107731 ']' 00:25:17.016 13:24:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:17.016 13:24:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:17.016 13:24:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:17.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:17.016 13:24:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:17.016 13:24:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:17.959 13:24:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:17.959 13:24:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:25:17.959 13:24:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:25:18.525 13:24:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:25:18.783 Nvme0n1 00:25:18.783 13:24:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:19.348 Nvme0n1 00:25:19.348 13:24:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:25:19.348 13:24:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:25:21.248 13:24:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:25:21.248 13:24:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:21.505 13:24:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 
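From here on the log is a cycle of set_ANA_state / check_status steps: the target flips the ANA state of the 4420 and 4421 listeners, the test sleeps a second, and the host side queries bdev_nvme_get_io_paths on the bdevperf RPC socket to verify which path is current, connected and accessible. A compact sketch of one such probe, assuming bdevperf is up on /var/tmp/bdevperf.sock with the Nvme0 controllers attached as above:

    #!/usr/bin/env bash
    # Sketch: demote one listener, then inspect the io_paths the host reports per port.
    set -eu
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    # Target side: make 4420 non-optimized while 4421 stays optimized.
    "$rpc" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
    "$rpc" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.2 -s 4421 -n optimized
    sleep 1   # give the host a moment to pick up the ANA change, as the test does

    # Host side: report current/connected/accessible for each service port.
    for port in 4420 4421; do
        "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
            | jq -r --arg p "$port" '.poll_groups[].io_paths[]
                | select(.transport.trsvcid == $p)
                | "\($p): current=\(.current) connected=\(.connected) accessible=\(.accessible)"'
    done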
00:25:21.762 13:24:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:25:22.695 13:24:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:25:22.695 13:24:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:22.695 13:24:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:22.695 13:24:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:23.262 13:24:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:23.262 13:24:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:23.262 13:24:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:23.262 13:24:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:23.262 13:24:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:23.262 13:24:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:23.262 13:24:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:23.262 13:24:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:23.589 13:24:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:23.589 13:24:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:23.589 13:24:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:23.589 13:24:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:23.861 13:24:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:23.861 13:24:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:23.861 13:24:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:23.861 13:24:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:24.425 13:24:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:24.425 13:24:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:24.425 13:24:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:24.425 13:24:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:24.683 13:24:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:24.683 13:24:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:25:24.683 13:24:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:24.941 13:24:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:25.198 13:24:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:25:26.129 13:24:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:25:26.129 13:24:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:26.129 13:24:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:26.129 13:24:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:26.388 13:24:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:26.388 13:24:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:26.388 13:24:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:26.388 13:24:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:26.645 13:24:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:26.645 13:24:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:26.645 13:24:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:26.645 13:24:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:27.210 13:24:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:27.210 13:24:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:27.210 13:24:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:27.210 13:24:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:27.468 13:24:23 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:27.468 13:24:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:27.468 13:24:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:27.468 13:24:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:27.725 13:24:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:27.725 13:24:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:27.725 13:24:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:27.725 13:24:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:27.982 13:24:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:27.982 13:24:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:25:27.982 13:24:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:28.240 13:24:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:28.497 13:24:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:25:29.869 13:24:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:25:29.869 13:24:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:29.869 13:24:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:29.869 13:24:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:29.869 13:24:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:29.869 13:24:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:29.869 13:24:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:29.869 13:24:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:30.181 13:24:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:30.181 13:24:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:30.181 13:24:26 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:30.181 13:24:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:30.453 13:24:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:30.453 13:24:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:30.453 13:24:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:30.453 13:24:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:30.711 13:24:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:30.711 13:24:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:30.711 13:24:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:30.711 13:24:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:30.969 13:24:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:30.969 13:24:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:30.969 13:24:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:30.969 13:24:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:31.227 13:24:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:31.227 13:24:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:25:31.227 13:24:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:31.485 13:24:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:31.742 13:24:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:25:32.685 13:24:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:25:32.685 13:24:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:32.685 13:24:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:32.685 13:24:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- 
# jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:32.943 13:24:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:32.943 13:24:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:32.943 13:24:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:32.943 13:24:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:33.202 13:24:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:33.202 13:24:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:33.202 13:24:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:33.202 13:24:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:33.460 13:24:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:33.460 13:24:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:33.460 13:24:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:33.460 13:24:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:33.718 13:24:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:33.718 13:24:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:33.718 13:24:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:33.718 13:24:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:33.976 13:24:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:33.976 13:24:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:33.976 13:24:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:33.976 13:24:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:34.234 13:24:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:34.234 13:24:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:25:34.234 13:24:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:34.491 13:24:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:34.749 13:24:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:25:36.123 13:24:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:25:36.123 13:24:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:36.123 13:24:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:36.123 13:24:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:36.123 13:24:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:36.123 13:24:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:36.123 13:24:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:36.123 13:24:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:36.381 13:24:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:36.381 13:24:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:36.381 13:24:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:36.381 13:24:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:36.642 13:24:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:36.642 13:24:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:36.642 13:24:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:36.642 13:24:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:36.902 13:24:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:36.902 13:24:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:36.902 13:24:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:36.902 13:24:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:37.160 13:24:33 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:37.160 13:24:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:37.160 13:24:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:37.160 13:24:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:37.420 13:24:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:37.420 13:24:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:25:37.420 13:24:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:37.678 13:24:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:37.936 13:24:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:25:39.309 13:24:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:25:39.309 13:24:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:39.310 13:24:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:39.310 13:24:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:39.310 13:24:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:39.310 13:24:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:39.310 13:24:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:39.310 13:24:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:39.567 13:24:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:39.567 13:24:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:39.567 13:24:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:39.567 13:24:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:39.825 13:24:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:39.826 13:24:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:39.826 13:24:36 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:39.826 13:24:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:40.083 13:24:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:40.083 13:24:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:40.083 13:24:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:40.083 13:24:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:40.647 13:24:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:40.647 13:24:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:40.647 13:24:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:40.647 13:24:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:40.904 13:24:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:40.904 13:24:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:25:41.163 13:24:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:25:41.163 13:24:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:41.420 13:24:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:41.678 13:24:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:25:43.052 13:24:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:25:43.052 13:24:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:43.052 13:24:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:43.052 13:24:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:43.052 13:24:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:43.052 13:24:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:43.052 13:24:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:43.052 13:24:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:43.311 13:24:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:43.311 13:24:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:43.311 13:24:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:43.311 13:24:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:43.569 13:24:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:43.569 13:24:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:43.569 13:24:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:43.569 13:24:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:43.836 13:24:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:43.836 13:24:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:43.836 13:24:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:43.836 13:24:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:44.113 13:24:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:44.113 13:24:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:44.114 13:24:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:44.114 13:24:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:44.680 13:24:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:44.680 13:24:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:25:44.680 13:24:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:44.938 13:24:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:45.197 13:24:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:25:46.130 
13:24:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:25:46.130 13:24:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:46.130 13:24:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:46.130 13:24:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:46.387 13:24:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:46.387 13:24:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:46.387 13:24:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:46.387 13:24:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:46.645 13:24:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:46.645 13:24:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:46.645 13:24:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:46.645 13:24:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:46.905 13:24:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:46.905 13:24:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:46.905 13:24:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:46.905 13:24:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:47.471 13:24:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:47.471 13:24:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:47.471 13:24:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:47.471 13:24:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:47.729 13:24:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:47.729 13:24:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:47.729 13:24:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:47.729 13:24:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:47.987 13:24:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:47.987 13:24:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:25:47.987 13:24:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:48.245 13:24:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:48.504 13:24:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:25:49.439 13:24:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:25:49.439 13:24:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:49.439 13:24:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:49.439 13:24:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:49.697 13:24:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:49.697 13:24:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:49.697 13:24:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:49.697 13:24:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:49.955 13:24:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:49.955 13:24:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:49.955 13:24:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:49.955 13:24:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:50.520 13:24:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:50.520 13:24:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:50.520 13:24:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:50.520 13:24:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:50.778 13:24:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:50.778 13:24:47 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:50.778 13:24:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:50.778 13:24:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:51.036 13:24:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:51.036 13:24:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:51.036 13:24:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:51.036 13:24:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:51.296 13:24:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:51.296 13:24:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:25:51.296 13:24:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:51.554 13:24:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:51.812 13:24:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:25:52.746 13:24:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:25:52.746 13:24:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:52.746 13:24:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:52.746 13:24:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:53.311 13:24:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:53.311 13:24:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:53.311 13:24:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:53.311 13:24:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:53.311 13:24:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:53.311 13:24:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:53.311 13:24:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:53.311 13:24:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:53.876 13:24:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:53.876 13:24:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:53.876 13:24:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:53.876 13:24:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:54.134 13:24:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:54.134 13:24:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:54.134 13:24:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:54.134 13:24:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:54.392 13:24:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:54.392 13:24:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:54.392 13:24:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:54.392 13:24:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:54.650 13:24:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:54.650 13:24:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 107731 00:25:54.650 13:24:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 107731 ']' 00:25:54.650 13:24:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 107731 00:25:54.650 13:24:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:25:54.650 13:24:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:54.650 13:24:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 107731 00:25:54.650 killing process with pid 107731 00:25:54.650 13:24:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:25:54.650 13:24:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:25:54.650 13:24:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 107731' 00:25:54.650 13:24:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 107731 00:25:54.650 13:24:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 107731 00:25:54.650 Connection closed with partial response: 00:25:54.650 
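The ANA permutations above are driven by two small helpers in host/multipath_status.sh: set_ANA_state, whose two nvmf_subsystem_listener_set_ana_state calls appear as the sh@59/sh@60 lines, and port_status, whose bdev_nvme_get_io_paths query plus jq filter appear as the sh@64 lines; check_status (sh@68-73) is just six port_status calls in a row. A stand-alone sketch of what the xtrace shows them doing follows; the RPC script path, bdevperf socket, NQN and listener addresses are copied from the log, while the function bodies themselves are reconstructed for illustration rather than taken from the real script:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# port_status <trsvcid> <field> <expected>: compare one field of the io_path
# listening on the given port against the expected value (sketch of sh@64).
port_status() {
    local got
    got=$("$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
          jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2")
    [[ "$got" == "$3" ]]
}

# set_ANA_state <state for 4420> <state for 4421>: retarget both listeners
# of cnode1 (sketch of sh@59/sh@60).
set_ANA_state() {
    "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -n "$1"
    "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4421 -n "$2"
}

# Example mirroring the optimized/optimized step after the policy switch:
#   "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
#   set_ANA_state optimized optimized
#   sleep 1
#   port_status 4420 current true && port_status 4421 accessible true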
00:25:54.650 00:25:54.922 13:24:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 107731 00:25:54.922 13:24:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:25:54.922 [2024-07-15 13:24:13.642349] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:25:54.922 [2024-07-15 13:24:13.642466] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107731 ] 00:25:54.922 [2024-07-15 13:24:13.778266] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:54.922 [2024-07-15 13:24:13.877820] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:54.922 Running I/O for 90 seconds... 00:25:54.922 [2024-07-15 13:24:31.122707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:109560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.922 [2024-07-15 13:24:31.122853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:54.922 [2024-07-15 13:24:31.122892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:109568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.922 [2024-07-15 13:24:31.122910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:54.922 [2024-07-15 13:24:31.122937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:109576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.922 [2024-07-15 13:24:31.122954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:54.922 [2024-07-15 13:24:31.122975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:109584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.922 [2024-07-15 13:24:31.122989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:54.922 [2024-07-15 13:24:31.123011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:109592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.922 [2024-07-15 13:24:31.123026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:54.922 [2024-07-15 13:24:31.123047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:109600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.922 [2024-07-15 13:24:31.123063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:54.922 [2024-07-15 13:24:31.123084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:109608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.922 [2024-07-15 13:24:31.123100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:54.922 [2024-07-15 13:24:31.123121] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:109616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.922 [2024-07-15 13:24:31.123137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:54.922 [2024-07-15 13:24:31.123159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:109624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.922 [2024-07-15 13:24:31.123175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:54.922 [2024-07-15 13:24:31.123196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:109632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.922 [2024-07-15 13:24:31.123226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:54.922 [2024-07-15 13:24:31.123250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:109640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.922 [2024-07-15 13:24:31.123286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:54.922 [2024-07-15 13:24:31.123311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:109648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.922 [2024-07-15 13:24:31.123325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:54.922 [2024-07-15 13:24:31.123346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:109656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.922 [2024-07-15 13:24:31.123360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:54.922 [2024-07-15 13:24:31.123381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:109664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.922 [2024-07-15 13:24:31.123395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:54.922 [2024-07-15 13:24:31.123417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:109672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.922 [2024-07-15 13:24:31.123431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:54.922 [2024-07-15 13:24:31.123452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:109680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.922 [2024-07-15 13:24:31.123467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:54.922 [2024-07-15 13:24:31.123488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:109688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.922 [2024-07-15 13:24:31.123502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:54.922 [2024-07-15 
13:24:31.125828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:109696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.922 [2024-07-15 13:24:31.125861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:54.922 [2024-07-15 13:24:31.125890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:109704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.922 [2024-07-15 13:24:31.125906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:54.922 [2024-07-15 13:24:31.125927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:109712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.922 [2024-07-15 13:24:31.125942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:54.922 [2024-07-15 13:24:31.125963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:109720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.922 [2024-07-15 13:24:31.125978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:54.922 [2024-07-15 13:24:31.126002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:109728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.922 [2024-07-15 13:24:31.126017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:54.922 [2024-07-15 13:24:31.126038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:109736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.922 [2024-07-15 13:24:31.126052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:54.922 [2024-07-15 13:24:31.126089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:109744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.922 [2024-07-15 13:24:31.126106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:54.922 [2024-07-15 13:24:31.126128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:109752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.922 [2024-07-15 13:24:31.126143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:54.922 [2024-07-15 13:24:31.126165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:109760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.922 [2024-07-15 13:24:31.126180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.922 [2024-07-15 13:24:31.126202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:109768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.922 [2024-07-15 13:24:31.126230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.922 [2024-07-15 13:24:31.126253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:109776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.922 [2024-07-15 13:24:31.126269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:54.922 [2024-07-15 13:24:31.126291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:109784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.922 [2024-07-15 13:24:31.126307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:54.922 [2024-07-15 13:24:31.126328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:109792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.922 [2024-07-15 13:24:31.126344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:54.922 [2024-07-15 13:24:31.126365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:109800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.922 [2024-07-15 13:24:31.126380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:54.922 [2024-07-15 13:24:31.126402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.922 [2024-07-15 13:24:31.126417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:54.922 [2024-07-15 13:24:31.126438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:109816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.922 [2024-07-15 13:24:31.126453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:54.922 [2024-07-15 13:24:31.126475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:109824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.922 [2024-07-15 13:24:31.126490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:54.922 [2024-07-15 13:24:31.126773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:109832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.922 [2024-07-15 13:24:31.126801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:54.922 [2024-07-15 13:24:31.126840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:109840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.922 [2024-07-15 13:24:31.126858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:54.922 [2024-07-15 13:24:31.126879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:109848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.922 [2024-07-15 13:24:31.126902] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000b p:0 m:0 dnr:0
[... long run of near-identical nvme_qpair.c NOTICE records: WRITE (and occasional READ) commands on sqid:1, nsid:1, lba 109560 through 110576, each paired with a spdk_nvme_print_completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02), logged 2024-07-15 13:24:31.126 to 13:24:31.140 ...]
00:25:54.926 [2024-07-15 13:24:31.140039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:109632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.926 [2024-07-15 13:24:31.140066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1
cid:105 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:54.926 [2024-07-15 13:24:31.140109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:109640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.926 [2024-07-15 13:24:31.140127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:54.926 [2024-07-15 13:24:31.140149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:109648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.926 [2024-07-15 13:24:31.140164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:54.926 [2024-07-15 13:24:31.140185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:109656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.926 [2024-07-15 13:24:31.140199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:54.926 [2024-07-15 13:24:31.140238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:109664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.926 [2024-07-15 13:24:31.140254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:54.926 [2024-07-15 13:24:31.140275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:109672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.926 [2024-07-15 13:24:31.140290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:54.926 [2024-07-15 13:24:31.140310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:109680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.926 [2024-07-15 13:24:31.140325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:54.926 [2024-07-15 13:24:31.140346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:109688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.926 [2024-07-15 13:24:31.140360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:54.926 [2024-07-15 13:24:31.140381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:109696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.926 [2024-07-15 13:24:31.140395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:54.926 [2024-07-15 13:24:31.140416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:109704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.926 [2024-07-15 13:24:31.140430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:54.926 [2024-07-15 13:24:31.140451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:109712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.926 [2024-07-15 13:24:31.140465] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:54.926 [2024-07-15 13:24:31.140486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:109720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.926 [2024-07-15 13:24:31.140500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:54.926 [2024-07-15 13:24:31.140526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.926 [2024-07-15 13:24:31.140540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:54.926 [2024-07-15 13:24:31.140571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:109736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.926 [2024-07-15 13:24:31.140587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:54.926 [2024-07-15 13:24:31.140608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:109744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.926 [2024-07-15 13:24:31.140623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:54.926 [2024-07-15 13:24:31.140644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:109752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.926 [2024-07-15 13:24:31.140658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:54.926 [2024-07-15 13:24:31.140679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:109760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.926 [2024-07-15 13:24:31.140693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:54.926 [2024-07-15 13:24:31.140721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:109768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.926 [2024-07-15 13:24:31.140737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:54.926 [2024-07-15 13:24:31.140758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:109776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.926 [2024-07-15 13:24:31.140772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:54.926 [2024-07-15 13:24:31.140793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:109784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.926 [2024-07-15 13:24:31.140807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:54.926 [2024-07-15 13:24:31.140828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:109792 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:25:54.926 [2024-07-15 13:24:31.140842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:54.926 [2024-07-15 13:24:31.140863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:109800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.926 [2024-07-15 13:24:31.140877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:54.926 [2024-07-15 13:24:31.140898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:109808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.926 [2024-07-15 13:24:31.140912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:54.926 [2024-07-15 13:24:31.140933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:109816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.926 [2024-07-15 13:24:31.140947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:54.926 [2024-07-15 13:24:31.140968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:109824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.926 [2024-07-15 13:24:31.140982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:54.926 [2024-07-15 13:24:31.141011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:109832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.926 [2024-07-15 13:24:31.141027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:54.926 [2024-07-15 13:24:31.141048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:109840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.926 [2024-07-15 13:24:31.141062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:54.926 [2024-07-15 13:24:31.141083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:109848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.926 [2024-07-15 13:24:31.141097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:54.926 [2024-07-15 13:24:31.141118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:109856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.926 [2024-07-15 13:24:31.141132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:54.926 [2024-07-15 13:24:31.141153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:109864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.926 [2024-07-15 13:24:31.141167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:54.926 [2024-07-15 13:24:31.141188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:88 nsid:1 lba:109872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.926 [2024-07-15 13:24:31.148815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:54.926 [2024-07-15 13:24:31.148884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:109880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.926 [2024-07-15 13:24:31.148904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:54.926 [2024-07-15 13:24:31.148927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:109888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.926 [2024-07-15 13:24:31.148943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:54.926 [2024-07-15 13:24:31.148966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:109896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.926 [2024-07-15 13:24:31.148980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:54.926 [2024-07-15 13:24:31.149001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:109904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.926 [2024-07-15 13:24:31.149015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:54.926 [2024-07-15 13:24:31.149036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:109912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.926 [2024-07-15 13:24:31.149051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:54.926 [2024-07-15 13:24:31.149071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:109920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.926 [2024-07-15 13:24:31.149086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:54.926 [2024-07-15 13:24:31.149107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:109928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.927 [2024-07-15 13:24:31.149137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:54.927 [2024-07-15 13:24:31.149161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:109936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.927 [2024-07-15 13:24:31.149176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:54.927 [2024-07-15 13:24:31.149197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:109944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.927 [2024-07-15 13:24:31.149227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.927 [2024-07-15 13:24:31.149250] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:109952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.927 [2024-07-15 13:24:31.149266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.927 [2024-07-15 13:24:31.149287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:109960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.927 [2024-07-15 13:24:31.149301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:54.927 [2024-07-15 13:24:31.149321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:109968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.927 [2024-07-15 13:24:31.149335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:54.927 [2024-07-15 13:24:31.149356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:109976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.927 [2024-07-15 13:24:31.149370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:54.927 [2024-07-15 13:24:31.149391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:109984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.927 [2024-07-15 13:24:31.149405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:54.927 [2024-07-15 13:24:31.149425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:109992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.927 [2024-07-15 13:24:31.149440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:54.927 [2024-07-15 13:24:31.149460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:110000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.927 [2024-07-15 13:24:31.149474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:54.927 [2024-07-15 13:24:31.149496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:110008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.927 [2024-07-15 13:24:31.149509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:54.927 [2024-07-15 13:24:31.150296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:110016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.927 [2024-07-15 13:24:31.150326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:54.927 [2024-07-15 13:24:31.150355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:110024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.927 [2024-07-15 13:24:31.150389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 
sqhd:000a p:0 m:0 dnr:0 00:25:54.927 [2024-07-15 13:24:31.150420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:110032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.927 [2024-07-15 13:24:31.150436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:54.927 [2024-07-15 13:24:31.150457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:110040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.927 [2024-07-15 13:24:31.150472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:54.927 [2024-07-15 13:24:31.150492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:110048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.927 [2024-07-15 13:24:31.150507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:54.927 [2024-07-15 13:24:31.150528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:110056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.927 [2024-07-15 13:24:31.150543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:54.927 [2024-07-15 13:24:31.150564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:110064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.927 [2024-07-15 13:24:31.150578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:54.927 [2024-07-15 13:24:31.150599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:110072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.927 [2024-07-15 13:24:31.150613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:54.927 [2024-07-15 13:24:31.150634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:110080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.927 [2024-07-15 13:24:31.150648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:54.927 [2024-07-15 13:24:31.150669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:110088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.927 [2024-07-15 13:24:31.150683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:54.927 [2024-07-15 13:24:31.150704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:110096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.927 [2024-07-15 13:24:31.150718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:54.927 [2024-07-15 13:24:31.150755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:110104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.927 [2024-07-15 13:24:31.150788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:54.927 [2024-07-15 13:24:31.150814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:110112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.927 [2024-07-15 13:24:31.150832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:54.927 [2024-07-15 13:24:31.150858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:110120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.927 [2024-07-15 13:24:31.150875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:54.927 [2024-07-15 13:24:31.150914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:110128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.927 [2024-07-15 13:24:31.150933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:54.927 [2024-07-15 13:24:31.150960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:110136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.927 [2024-07-15 13:24:31.150978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:54.927 [2024-07-15 13:24:31.151004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:110144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.927 [2024-07-15 13:24:31.151023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:54.927 [2024-07-15 13:24:31.151050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:110152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.927 [2024-07-15 13:24:31.151068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:54.927 [2024-07-15 13:24:31.151094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:110160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.927 [2024-07-15 13:24:31.151111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:54.927 [2024-07-15 13:24:31.151137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:110168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.927 [2024-07-15 13:24:31.151155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:54.927 [2024-07-15 13:24:31.151181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:110176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.927 [2024-07-15 13:24:31.151198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:54.927 [2024-07-15 13:24:31.151241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:110184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.927 [2024-07-15 
13:24:31.151262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:54.927 [2024-07-15 13:24:31.151288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:110192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.927 [2024-07-15 13:24:31.151306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:54.927 [2024-07-15 13:24:31.151333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:110200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.927 [2024-07-15 13:24:31.151351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:54.927 [2024-07-15 13:24:31.151377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:110208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.927 [2024-07-15 13:24:31.151394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:54.927 [2024-07-15 13:24:31.151420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:110216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.927 [2024-07-15 13:24:31.151438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:54.927 [2024-07-15 13:24:31.151479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:110224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.927 [2024-07-15 13:24:31.151498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:54.927 [2024-07-15 13:24:31.151524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:110232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.927 [2024-07-15 13:24:31.151541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:54.927 [2024-07-15 13:24:31.151568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:110240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.927 [2024-07-15 13:24:31.151585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:54.927 [2024-07-15 13:24:31.151611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:110248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.927 [2024-07-15 13:24:31.151628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:54.927 [2024-07-15 13:24:31.151654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:110256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.927 [2024-07-15 13:24:31.151672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:54.927 [2024-07-15 13:24:31.151698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:110264 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.927 [2024-07-15 13:24:31.151716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:54.927 [2024-07-15 13:24:31.151742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:110272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.927 [2024-07-15 13:24:31.151759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:54.927 [2024-07-15 13:24:31.151786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:110280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.927 [2024-07-15 13:24:31.151804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:54.927 [2024-07-15 13:24:31.151830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:110288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.927 [2024-07-15 13:24:31.151849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:54.927 [2024-07-15 13:24:31.151875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:110296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.927 [2024-07-15 13:24:31.151893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:54.927 [2024-07-15 13:24:31.151919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:110304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.927 [2024-07-15 13:24:31.151937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:54.927 [2024-07-15 13:24:31.151963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:110312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.927 [2024-07-15 13:24:31.151981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:54.927 [2024-07-15 13:24:31.152007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:110400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.927 [2024-07-15 13:24:31.152034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:54.927 [2024-07-15 13:24:31.152061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:110408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.927 [2024-07-15 13:24:31.152080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:54.927 [2024-07-15 13:24:31.152106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:110416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.927 [2024-07-15 13:24:31.152124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:54.927 [2024-07-15 13:24:31.152151] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:110424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.927 [2024-07-15 13:24:31.152169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:54.927 [2024-07-15 13:24:31.152195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:110432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.927 [2024-07-15 13:24:31.152226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:54.927 [2024-07-15 13:24:31.152254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:110440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.927 [2024-07-15 13:24:31.152272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:54.928 [2024-07-15 13:24:31.152298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:110448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.928 [2024-07-15 13:24:31.152317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:54.928 [2024-07-15 13:24:31.152343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:110456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.928 [2024-07-15 13:24:31.152360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:54.928 [2024-07-15 13:24:31.152387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:110464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.928 [2024-07-15 13:24:31.152404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:54.928 [2024-07-15 13:24:31.152431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:110472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.928 [2024-07-15 13:24:31.152449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:54.928 [2024-07-15 13:24:31.152475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:110480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.928 [2024-07-15 13:24:31.152493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:54.928 [2024-07-15 13:24:31.152520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:110488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.928 [2024-07-15 13:24:31.152538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:54.928 [2024-07-15 13:24:31.152564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:110496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.928 [2024-07-15 13:24:31.152591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:54.928 [2024-07-15 
13:24:31.152619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:110504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.928 [2024-07-15 13:24:31.152638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:54.928 [2024-07-15 13:24:31.152664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:110512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.928 [2024-07-15 13:24:31.152682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:54.928 [2024-07-15 13:24:31.152708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:110520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.928 [2024-07-15 13:24:31.152726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:54.928 [2024-07-15 13:24:31.152752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:110528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.928 [2024-07-15 13:24:31.152770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:54.928 [2024-07-15 13:24:31.152795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:110536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.928 [2024-07-15 13:24:31.152813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:54.928 [2024-07-15 13:24:31.152839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:110544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.928 [2024-07-15 13:24:31.152856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:54.928 [2024-07-15 13:24:31.152883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:110552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.928 [2024-07-15 13:24:31.152900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:54.928 [2024-07-15 13:24:31.152926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:110560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.928 [2024-07-15 13:24:31.152944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:54.928 [2024-07-15 13:24:31.152970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:110568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.928 [2024-07-15 13:24:31.152988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:54.928 [2024-07-15 13:24:31.153014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:110576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.928 [2024-07-15 13:24:31.153031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 
cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:54.928 [2024-07-15 13:24:31.153057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:110320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.928 [2024-07-15 13:24:31.153074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:54.928 [2024-07-15 13:24:31.153101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:110328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.928 [2024-07-15 13:24:31.153118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:54.928 [2024-07-15 13:24:31.153153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:110336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.928 [2024-07-15 13:24:31.153172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:54.928 [2024-07-15 13:24:31.153198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:110344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.928 [2024-07-15 13:24:31.153232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:54.928 [2024-07-15 13:24:31.153260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:110352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.928 [2024-07-15 13:24:31.153278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:54.928 [2024-07-15 13:24:31.153304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:110360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.928 [2024-07-15 13:24:31.153322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:54.928 [2024-07-15 13:24:31.153361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:110368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.928 [2024-07-15 13:24:31.153380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:54.928 [2024-07-15 13:24:31.153407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:110376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.928 [2024-07-15 13:24:31.153424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:54.928 [2024-07-15 13:24:31.153450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:110384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.928 [2024-07-15 13:24:31.153468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:54.928 [2024-07-15 13:24:31.153494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:110392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.928 [2024-07-15 13:24:31.153512] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:54.928 [2024-07-15 13:24:31.153538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:109560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.928 [2024-07-15 13:24:31.153556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:54.928 [2024-07-15 13:24:31.153583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:109568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.928 [2024-07-15 13:24:31.153600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:54.928 [2024-07-15 13:24:31.153626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:109576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.928 [2024-07-15 13:24:31.153644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:54.928 [2024-07-15 13:24:31.153669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:109584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.928 [2024-07-15 13:24:31.153687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:54.928 [2024-07-15 13:24:31.153724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:109592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.928 [2024-07-15 13:24:31.153743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:54.928 [2024-07-15 13:24:31.153769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:109600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.928 [2024-07-15 13:24:31.153787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:54.928 [2024-07-15 13:24:31.153813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:109608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.928 [2024-07-15 13:24:31.153830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:54.928 [2024-07-15 13:24:31.153865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:109616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.928 [2024-07-15 13:24:31.153883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:54.928 [2024-07-15 13:24:31.155174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:109624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.928 [2024-07-15 13:24:31.155226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:54.928 [2024-07-15 13:24:31.155263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:109632 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:25:54.928 [2024-07-15 13:24:31.155284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 
00:25:54.928 - 00:25:54.932 [2024-07-15 13:24:31.155310 - 13:24:31.166649] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: [repeated command/completion pairs condensed: WRITE sqid:1 nsid:1 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 for lba 109568 through 110576, plus one READ sqid:1 cid:31 nsid:1 lba:109560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0; every completion reports ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0] 
00:25:54.932 [2024-07-15 13:24:31.166670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:110256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.932 [2024-07-15 13:24:31.166685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:54.932 [2024-07-15 13:24:31.166705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:110264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.932 [2024-07-15 13:24:31.166719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:54.932 [2024-07-15 13:24:31.166751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:110272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.932 [2024-07-15 13:24:31.166767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:54.932 [2024-07-15 13:24:31.166788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:110280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.932 [2024-07-15 13:24:31.166803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:54.932 [2024-07-15 13:24:31.166824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:110288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.932 [2024-07-15 13:24:31.166838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:54.932 [2024-07-15 13:24:31.166858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:110296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.932 [2024-07-15 13:24:31.166873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:54.932 [2024-07-15 13:24:31.166893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:110304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.932 [2024-07-15 13:24:31.166908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:54.932 [2024-07-15 13:24:31.166928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:110312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.932 [2024-07-15 13:24:31.166942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:54.932 [2024-07-15 13:24:31.166962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:110400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.932 [2024-07-15 13:24:31.166977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:54.932 [2024-07-15 13:24:31.167006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:110408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.932 [2024-07-15 13:24:31.167022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:54.932 [2024-07-15 13:24:31.167042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:110416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.932 [2024-07-15 
13:24:31.167056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:54.932 [2024-07-15 13:24:31.167078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:110424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.932 [2024-07-15 13:24:31.167092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:54.932 [2024-07-15 13:24:31.167112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:110432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.932 [2024-07-15 13:24:31.167126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:54.932 [2024-07-15 13:24:31.167147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:110440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.932 [2024-07-15 13:24:31.167161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:54.932 [2024-07-15 13:24:31.167181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:110448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.932 [2024-07-15 13:24:31.167196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:54.932 [2024-07-15 13:24:31.167228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:110456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.932 [2024-07-15 13:24:31.167244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:54.932 [2024-07-15 13:24:31.167265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:110464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.932 [2024-07-15 13:24:31.167279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:54.932 [2024-07-15 13:24:31.167300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:110472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.932 [2024-07-15 13:24:31.167314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:54.932 [2024-07-15 13:24:31.167335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:110480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.932 [2024-07-15 13:24:31.167348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:54.932 [2024-07-15 13:24:31.167369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:110488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.932 [2024-07-15 13:24:31.167384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:54.932 [2024-07-15 13:24:31.167404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:110496 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:25:54.932 [2024-07-15 13:24:31.167418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:54.932 [2024-07-15 13:24:31.167447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:110504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.932 [2024-07-15 13:24:31.167463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:54.932 [2024-07-15 13:24:31.167483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:110512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.932 [2024-07-15 13:24:31.167498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:54.932 [2024-07-15 13:24:31.167518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:110520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.932 [2024-07-15 13:24:31.167532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:54.932 [2024-07-15 13:24:31.167553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:110528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.932 [2024-07-15 13:24:31.167567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:54.932 [2024-07-15 13:24:31.167595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:110536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.932 [2024-07-15 13:24:31.167609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:54.932 [2024-07-15 13:24:31.167629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:110544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.932 [2024-07-15 13:24:31.167643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:54.932 [2024-07-15 13:24:31.167664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:110552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.932 [2024-07-15 13:24:31.167678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:54.932 [2024-07-15 13:24:31.167698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:110560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.932 [2024-07-15 13:24:31.167712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:54.932 [2024-07-15 13:24:31.167733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:110568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.932 [2024-07-15 13:24:31.167747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:54.932 [2024-07-15 13:24:31.167767] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:110576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.932 [2024-07-15 13:24:31.167781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:54.932 [2024-07-15 13:24:31.167802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:110320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.932 [2024-07-15 13:24:31.167816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:54.932 [2024-07-15 13:24:31.167837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:110328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.932 [2024-07-15 13:24:31.167851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:54.932 [2024-07-15 13:24:31.167871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:110336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.932 [2024-07-15 13:24:31.167893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:54.932 [2024-07-15 13:24:31.167915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:110344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.932 [2024-07-15 13:24:31.167929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:54.932 [2024-07-15 13:24:31.167951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:110352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.932 [2024-07-15 13:24:31.167965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:54.932 [2024-07-15 13:24:31.167985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:110360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.932 [2024-07-15 13:24:31.167999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:54.932 [2024-07-15 13:24:31.168020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:110368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.932 [2024-07-15 13:24:31.168034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:54.932 [2024-07-15 13:24:31.168055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:110376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.932 [2024-07-15 13:24:31.168069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:54.932 [2024-07-15 13:24:31.168089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:110384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.932 [2024-07-15 13:24:31.168103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:54.932 [2024-07-15 
13:24:31.168124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:110392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.932 [2024-07-15 13:24:31.168138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:54.932 [2024-07-15 13:24:31.168158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:109560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.932 [2024-07-15 13:24:31.168172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:54.932 [2024-07-15 13:24:31.168193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:109568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.932 [2024-07-15 13:24:31.168219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:54.932 [2024-07-15 13:24:31.168242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:109576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.932 [2024-07-15 13:24:31.168256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:54.932 [2024-07-15 13:24:31.168277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:109584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.932 [2024-07-15 13:24:31.168291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:54.932 [2024-07-15 13:24:31.168312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:109592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.932 [2024-07-15 13:24:31.168334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:54.932 [2024-07-15 13:24:31.168357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:109600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.932 [2024-07-15 13:24:31.168371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:54.932 [2024-07-15 13:24:31.169370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:109608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.932 [2024-07-15 13:24:31.169398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:54.932 [2024-07-15 13:24:31.169423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:109616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.932 [2024-07-15 13:24:31.169439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:54.932 [2024-07-15 13:24:31.169460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:109624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.932 [2024-07-15 13:24:31.169475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:102 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:54.932 [2024-07-15 13:24:31.169496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:109632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.932 [2024-07-15 13:24:31.169511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:54.932 [2024-07-15 13:24:31.169532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:109640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.932 [2024-07-15 13:24:31.169546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:54.932 [2024-07-15 13:24:31.169567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:109648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.932 [2024-07-15 13:24:31.169581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:54.933 [2024-07-15 13:24:31.169602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:109656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.933 [2024-07-15 13:24:31.169616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:54.933 [2024-07-15 13:24:31.169637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:109664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.933 [2024-07-15 13:24:31.169651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:54.933 [2024-07-15 13:24:31.169671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:109672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.933 [2024-07-15 13:24:31.169685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:54.933 [2024-07-15 13:24:31.169706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:109680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.933 [2024-07-15 13:24:31.169720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:54.933 [2024-07-15 13:24:31.169740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:109688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.933 [2024-07-15 13:24:31.169766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:54.933 [2024-07-15 13:24:31.169789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:109696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.933 [2024-07-15 13:24:31.169804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:54.933 [2024-07-15 13:24:31.169824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:109704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.933 [2024-07-15 13:24:31.169838] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:54.933 [2024-07-15 13:24:31.169859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:109712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.933 [2024-07-15 13:24:31.169873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:54.933 [2024-07-15 13:24:31.169894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:109720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.933 [2024-07-15 13:24:31.169908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:54.933 [2024-07-15 13:24:31.169928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:109728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.933 [2024-07-15 13:24:31.169942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:54.933 [2024-07-15 13:24:31.169963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:109736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.933 [2024-07-15 13:24:31.169978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:54.933 [2024-07-15 13:24:31.169998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:109744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.933 [2024-07-15 13:24:31.170012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:54.933 [2024-07-15 13:24:31.170033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:109752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.933 [2024-07-15 13:24:31.170047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:54.933 [2024-07-15 13:24:31.170067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:109760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.933 [2024-07-15 13:24:31.170082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:54.933 [2024-07-15 13:24:31.170102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:109768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.933 [2024-07-15 13:24:31.170116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:54.933 [2024-07-15 13:24:31.170137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:109776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.933 [2024-07-15 13:24:31.170151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:54.933 [2024-07-15 13:24:31.170172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:109784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
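The repeated *NOTICE* pairs above come from SPDK's nvme_qpair.c print helpers: each WRITE command print is followed by a completion print whose status field "(03/02)" decodes to status code type 0x3 (Path Related) and status code 0x2 (Asymmetric Access Inaccessible), i.e. the I/O completed on a path whose ANA state is currently inaccessible. When a console log like this needs to be condensed for review, a minimal sketch along the following lines can tally the completions by status and queue. This is an illustrative helper only, not part of the SPDK test suite, and the script/file names are hypothetical.

#!/usr/bin/env python3
# Tally SPDK nvme_qpair completion notices from a saved console log.
# Illustrative sketch only; "console.log" is a hypothetical file name.
import re
import sys
from collections import Counter

# Matches the completion print emitted by spdk_nvme_print_completion(), e.g.
#   *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 ...
# where the two hex fields are the status code type (sct) and status code (sc).
COMPLETION = re.compile(
    r"\*NOTICE\*: (?P<status>.+?) \((?P<sct>[0-9a-f]{2})/(?P<sc>[0-9a-f]{2})\) "
    r"qid:(?P<qid>\d+) cid:(?P<cid>\d+)"
)

def summarize(path: str) -> None:
    counts = Counter()
    with open(path, errors="replace") as log:
        for line in log:
            # Records may be fused onto one physical line, so scan each line
            # for every completion print rather than assuming one per line.
            for m in COMPLETION.finditer(line):
                counts[(m["status"], m["sct"], m["sc"], m["qid"])] += 1
    for (status, sct, sc, qid), n in counts.most_common():
        print(f"{n:8d}  sct=0x{sct} sc=0x{sc} qid={qid}  {status}")

if __name__ == "__main__":
    # Usage (hypothetical): python3 tally_completions.py console.log
    summarize(sys.argv[1] if len(sys.argv) > 1 else "console.log")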
00:25:54.933 [2024-07-15 13:24:31.170186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:54.933 [2024-07-15 13:24:31.170228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:109792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.933 [2024-07-15 13:24:31.170246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:54.933 [2024-07-15 13:24:31.170267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:109800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.933 [2024-07-15 13:24:31.170281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:54.933 [2024-07-15 13:24:31.170302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:109808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.933 [2024-07-15 13:24:31.170317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:54.933 [2024-07-15 13:24:31.170337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:109816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.933 [2024-07-15 13:24:31.170352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:54.933 [2024-07-15 13:24:31.170372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:109824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.933 [2024-07-15 13:24:31.170386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:54.933 [2024-07-15 13:24:31.170407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:109832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.933 [2024-07-15 13:24:31.170421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:54.933 [2024-07-15 13:24:31.170442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:109840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.933 [2024-07-15 13:24:31.170456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:54.933 [2024-07-15 13:24:31.170477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:109848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.933 [2024-07-15 13:24:31.170491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:54.933 [2024-07-15 13:24:31.170512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:109856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.933 [2024-07-15 13:24:31.170526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:54.933 [2024-07-15 13:24:31.170547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 
nsid:1 lba:109864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.933 [2024-07-15 13:24:31.170561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:54.933 [2024-07-15 13:24:31.170581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:109872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.933 [2024-07-15 13:24:31.170596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:54.933 [2024-07-15 13:24:31.170616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:109880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.933 [2024-07-15 13:24:31.170631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:54.933 [2024-07-15 13:24:31.170659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.933 [2024-07-15 13:24:31.170675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:54.933 [2024-07-15 13:24:31.170696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:109896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.933 [2024-07-15 13:24:31.170710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:54.933 [2024-07-15 13:24:31.170741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:109904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.933 [2024-07-15 13:24:31.170758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:54.933 [2024-07-15 13:24:31.170780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:109912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.933 [2024-07-15 13:24:31.170794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:54.933 [2024-07-15 13:24:31.170815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:109920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.933 [2024-07-15 13:24:31.170829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:54.933 [2024-07-15 13:24:31.170849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:109928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.933 [2024-07-15 13:24:31.170863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:54.933 [2024-07-15 13:24:31.170884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:109936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.933 [2024-07-15 13:24:31.170898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:54.933 [2024-07-15 13:24:31.170919] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:109944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.933 [2024-07-15 13:24:31.170933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.933 [2024-07-15 13:24:31.170954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:109952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.933 [2024-07-15 13:24:31.170968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.933 [2024-07-15 13:24:31.170988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:109960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.933 [2024-07-15 13:24:31.171002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:54.933 [2024-07-15 13:24:31.171023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:109968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.933 [2024-07-15 13:24:31.171037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:54.933 [2024-07-15 13:24:31.171058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:109976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.933 [2024-07-15 13:24:31.171072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:54.933 [2024-07-15 13:24:31.171093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:109984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.933 [2024-07-15 13:24:31.171115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:54.933 [2024-07-15 13:24:31.171743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:109992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.933 [2024-07-15 13:24:31.171769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:54.933 [2024-07-15 13:24:31.171794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:110000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.933 [2024-07-15 13:24:31.171810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:54.933 [2024-07-15 13:24:31.171831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:110008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.933 [2024-07-15 13:24:31.171846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:54.933 [2024-07-15 13:24:31.171867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:110016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.933 [2024-07-15 13:24:31.171881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0009 p:0 m:0 
dnr:0 00:25:54.933 [2024-07-15 13:24:31.171903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:110024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.933 [2024-07-15 13:24:31.171917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:54.933 [2024-07-15 13:24:31.171938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:110032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.933 [2024-07-15 13:24:31.171952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:54.933 [2024-07-15 13:24:31.171973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:110040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.933 [2024-07-15 13:24:31.171987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:54.933 [2024-07-15 13:24:31.172007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:110048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.933 [2024-07-15 13:24:31.172021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:54.933 [2024-07-15 13:24:31.172042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:110056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.933 [2024-07-15 13:24:31.172056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:54.933 [2024-07-15 13:24:31.172076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:110064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.933 [2024-07-15 13:24:31.172091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:54.933 [2024-07-15 13:24:31.172111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:110072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.933 [2024-07-15 13:24:31.172125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:54.933 [2024-07-15 13:24:31.172146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:110080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.933 [2024-07-15 13:24:31.172173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:54.933 [2024-07-15 13:24:31.172196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:110088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.934 [2024-07-15 13:24:31.172225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:54.934 [2024-07-15 13:24:31.172248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:110096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.934 [2024-07-15 13:24:31.172263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:54.934 [2024-07-15 13:24:31.172283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:110104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.934 [2024-07-15 13:24:31.172297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:54.934 [2024-07-15 13:24:31.172318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:110112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.934 [2024-07-15 13:24:31.172333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:54.934 [2024-07-15 13:24:31.172353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:110120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.934 [2024-07-15 13:24:31.172367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:54.934 [2024-07-15 13:24:31.172388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:110128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.934 [2024-07-15 13:24:31.172402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:54.934 [2024-07-15 13:24:31.172423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:110136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.934 [2024-07-15 13:24:31.172437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:54.934 [2024-07-15 13:24:31.172457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:110144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.934 [2024-07-15 13:24:31.172471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:54.934 [2024-07-15 13:24:31.172492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:110152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.934 [2024-07-15 13:24:31.172506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:54.934 [2024-07-15 13:24:31.172527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:110160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.934 [2024-07-15 13:24:31.172541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:54.934 [2024-07-15 13:24:31.172561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:110168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.934 [2024-07-15 13:24:31.172575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:54.934 [2024-07-15 13:24:31.172595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:110176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.934 [2024-07-15 13:24:31.172609] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:54.934 [2024-07-15 13:24:31.172639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:110184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.934 [2024-07-15 13:24:31.172654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:54.934 [2024-07-15 13:24:31.172675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:110192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.934 [2024-07-15 13:24:31.172689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:54.934 [2024-07-15 13:24:31.172709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:110200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.934 [2024-07-15 13:24:31.172724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:54.934 [2024-07-15 13:24:31.172744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:110208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.934 [2024-07-15 13:24:31.172759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:54.934 [2024-07-15 13:24:31.172779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:110216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.934 [2024-07-15 13:24:31.172793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:54.934 [2024-07-15 13:24:31.172814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:110224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.934 [2024-07-15 13:24:31.172828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:54.934 [2024-07-15 13:24:31.172848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:110232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.934 [2024-07-15 13:24:31.172862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:54.934 [2024-07-15 13:24:31.172883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:110240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.934 [2024-07-15 13:24:31.172897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:54.934 [2024-07-15 13:24:31.172918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:110248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.934 [2024-07-15 13:24:31.172932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:54.934 [2024-07-15 13:24:31.172953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:110256 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:25:54.934 [2024-07-15 13:24:31.172967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:54.934 [2024-07-15 13:24:31.172988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:110264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.934 [2024-07-15 13:24:31.173002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:54.934 [2024-07-15 13:24:31.173023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:110272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.934 [2024-07-15 13:24:31.173037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:54.934 [2024-07-15 13:24:31.173067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:110280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.934 [2024-07-15 13:24:31.173082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:54.934 [2024-07-15 13:24:31.173103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:110288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.934 [2024-07-15 13:24:31.173117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:54.934 [2024-07-15 13:24:31.173138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:110296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.934 [2024-07-15 13:24:31.173152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:54.934 [2024-07-15 13:24:31.173173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:110304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.934 [2024-07-15 13:24:31.173187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:54.934 [2024-07-15 13:24:31.173220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:110312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.934 [2024-07-15 13:24:31.173237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:54.934 [2024-07-15 13:24:31.173258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:110400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.934 [2024-07-15 13:24:31.173273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:54.934 [2024-07-15 13:24:31.173293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:110408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.934 [2024-07-15 13:24:31.173308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:54.934 [2024-07-15 13:24:31.173328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:76 nsid:1 lba:110416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.934 [2024-07-15 13:24:31.173342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:54.934 [2024-07-15 13:24:31.173363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:110424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.934 [2024-07-15 13:24:31.173377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:54.934 [2024-07-15 13:24:31.173398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:110432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.934 [2024-07-15 13:24:31.173412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:54.934 [2024-07-15 13:24:31.173433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:110440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.934 [2024-07-15 13:24:31.173447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:54.934 [2024-07-15 13:24:31.173468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:110448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.934 [2024-07-15 13:24:31.173482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:54.934 [2024-07-15 13:24:31.173512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:110456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.934 [2024-07-15 13:24:31.173528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:54.934 [2024-07-15 13:24:31.173548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:110464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.934 [2024-07-15 13:24:31.173563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:54.934 [2024-07-15 13:24:31.173584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:110472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.934 [2024-07-15 13:24:31.173598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:54.934 [2024-07-15 13:24:31.173618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:110480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.934 [2024-07-15 13:24:31.173633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:54.934 [2024-07-15 13:24:31.173654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:110488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.934 [2024-07-15 13:24:31.173669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:54.934 [2024-07-15 13:24:31.173689] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:110496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.934 [2024-07-15 13:24:31.173703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:54.934 [2024-07-15 13:24:31.173724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:110504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.934 [2024-07-15 13:24:31.173738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:54.934 [2024-07-15 13:24:31.173759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:110512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.934 [2024-07-15 13:24:31.173773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:54.934 [2024-07-15 13:24:31.173794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:110520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.934 [2024-07-15 13:24:31.173808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:54.934 [2024-07-15 13:24:31.173829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:110528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.934 [2024-07-15 13:24:31.173843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:54.934 [2024-07-15 13:24:31.173864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:110536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.934 [2024-07-15 13:24:31.173877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:54.934 [2024-07-15 13:24:31.173898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:110544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.934 [2024-07-15 13:24:31.173912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:54.934 [2024-07-15 13:24:31.173933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:110552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.934 [2024-07-15 13:24:31.173955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:54.934 [2024-07-15 13:24:31.173977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:110560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.934 [2024-07-15 13:24:31.173991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:54.934 [2024-07-15 13:24:31.174012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:110568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.934 [2024-07-15 13:24:31.174027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0044 p:0 m:0 
dnr:0 00:25:54.934 [2024-07-15 13:24:31.174048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:110576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.934 [2024-07-15 13:24:31.174062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:54.934 [2024-07-15 13:24:31.174082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:110320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.934 [2024-07-15 13:24:31.174096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:54.934 [2024-07-15 13:24:31.174117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:110328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.934 [2024-07-15 13:24:31.174131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:54.934 [2024-07-15 13:24:31.174152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:110336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.934 [2024-07-15 13:24:31.174166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:54.934 [2024-07-15 13:24:31.174187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:110344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.934 [2024-07-15 13:24:31.174201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:54.934 [2024-07-15 13:24:31.174235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:110352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.934 [2024-07-15 13:24:31.174249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:54.934 [2024-07-15 13:24:31.174270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:110360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.934 [2024-07-15 13:24:31.174284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:54.934 [2024-07-15 13:24:31.174305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:110368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.934 [2024-07-15 13:24:31.174319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:54.934 [2024-07-15 13:24:31.174339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:110376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.934 [2024-07-15 13:24:31.174353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:54.934 [2024-07-15 13:24:31.174374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:110384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.935 [2024-07-15 13:24:31.174395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:54.935 [2024-07-15 13:24:31.174418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:110392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.935 [2024-07-15 13:24:31.174432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:54.935 [2024-07-15 13:24:31.174453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:109560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.935 [2024-07-15 13:24:31.174467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:54.935 [2024-07-15 13:24:31.174488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:109568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.935 [2024-07-15 13:24:31.174502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:54.935 [2024-07-15 13:24:31.174523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:109576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.935 [2024-07-15 13:24:31.174537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:54.935 [2024-07-15 13:24:31.174558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:109584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.935 [2024-07-15 13:24:31.174572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:54.935 [2024-07-15 13:24:31.174593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:109592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.935 [2024-07-15 13:24:31.174608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:54.935 [2024-07-15 13:24:31.174938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:109600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.935 [2024-07-15 13:24:31.174966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:54.935 [2024-07-15 13:24:31.175015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:109608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.935 [2024-07-15 13:24:31.175034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:54.935 [2024-07-15 13:24:31.175062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:109616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.935 [2024-07-15 13:24:31.175077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:54.935 [2024-07-15 13:24:31.175104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:109624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.935 [2024-07-15 13:24:31.175119] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:54.935 [2024-07-15 13:24:31.175146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:109632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.935 [2024-07-15 13:24:31.175161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:54.935 [2024-07-15 13:24:31.175188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:109640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.935 [2024-07-15 13:24:31.175203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:54.935 [2024-07-15 13:24:31.175259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:109648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.935 [2024-07-15 13:24:31.175277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:54.935 [2024-07-15 13:24:31.175304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:109656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.935 [2024-07-15 13:24:31.175319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:54.935 [2024-07-15 13:24:31.175345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:109664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.935 [2024-07-15 13:24:31.175360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:54.935 [2024-07-15 13:24:31.175387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:109672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.935 [2024-07-15 13:24:31.175401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:54.935 [2024-07-15 13:24:31.175428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:109680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.935 [2024-07-15 13:24:31.175442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:54.935 [2024-07-15 13:24:31.175469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:109688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.935 [2024-07-15 13:24:31.175483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:54.935 [2024-07-15 13:24:31.175510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.935 [2024-07-15 13:24:31.175525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:54.935 [2024-07-15 13:24:31.175551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:109704 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:25:54.935 [2024-07-15 13:24:31.175566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:54.935 [2024-07-15 13:24:31.175592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:109712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.935 [2024-07-15 13:24:31.175607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:54.935 [2024-07-15 13:24:31.175634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:109720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.935 [2024-07-15 13:24:31.175648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:54.935 [2024-07-15 13:24:31.175675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:109728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.935 [2024-07-15 13:24:31.175689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:54.935 [2024-07-15 13:24:31.175716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:109736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.935 [2024-07-15 13:24:31.175731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:54.935 [2024-07-15 13:24:31.175766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:109744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.935 [2024-07-15 13:24:31.175781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:54.935 [2024-07-15 13:24:31.175808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:109752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.935 [2024-07-15 13:24:31.175823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:54.935 [2024-07-15 13:24:31.175850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:109760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.935 [2024-07-15 13:24:31.175865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:54.935 [2024-07-15 13:24:31.175892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:109768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.935 [2024-07-15 13:24:31.175907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:54.935 [2024-07-15 13:24:31.175934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:109776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.935 [2024-07-15 13:24:31.175948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:54.935 [2024-07-15 13:24:31.175976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:80 nsid:1 lba:109784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.935 [2024-07-15 13:24:31.175990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:54.935 [2024-07-15 13:24:31.176017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:109792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.935 [2024-07-15 13:24:31.176033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:54.935 [2024-07-15 13:24:31.176059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:109800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.935 [2024-07-15 13:24:31.176074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:54.935 [2024-07-15 13:24:31.176100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:109808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.935 [2024-07-15 13:24:31.176115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:54.935 [2024-07-15 13:24:31.176142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:109816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.935 [2024-07-15 13:24:31.176156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:54.935 [2024-07-15 13:24:31.176183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:109824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.935 [2024-07-15 13:24:31.176198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:54.935 [2024-07-15 13:24:31.176236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:109832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.935 [2024-07-15 13:24:31.176252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:54.935 [2024-07-15 13:24:31.176279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:109840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.935 [2024-07-15 13:24:31.176304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:54.935 [2024-07-15 13:24:31.176332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:109848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.935 [2024-07-15 13:24:31.176347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:54.935 [2024-07-15 13:24:31.176374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:109856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.935 [2024-07-15 13:24:31.176389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:54.935 [2024-07-15 13:24:31.176415] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:109864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.935 [2024-07-15 13:24:31.176430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:54.935 [2024-07-15 13:24:31.176456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:109872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.935 [2024-07-15 13:24:31.176471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:54.935 [2024-07-15 13:24:31.176498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:109880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.935 [2024-07-15 13:24:31.176512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:54.935 [2024-07-15 13:24:31.176539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:109888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.935 [2024-07-15 13:24:31.176553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:54.935 [2024-07-15 13:24:31.176580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:109896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.935 [2024-07-15 13:24:31.176595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:54.935 [2024-07-15 13:24:31.176622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:109904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.935 [2024-07-15 13:24:31.176637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:54.935 [2024-07-15 13:24:31.176663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:109912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.935 [2024-07-15 13:24:31.176678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:54.935 [2024-07-15 13:24:31.176705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:109920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.935 [2024-07-15 13:24:31.176720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:54.935 [2024-07-15 13:24:31.176746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:109928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.935 [2024-07-15 13:24:31.176761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:54.935 [2024-07-15 13:24:31.176787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:109936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.935 [2024-07-15 13:24:31.176809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 
sqhd:007f p:0 m:0 dnr:0 00:25:54.935 [2024-07-15 13:24:31.176838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:109944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.935 [2024-07-15 13:24:31.176853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.935 [2024-07-15 13:24:31.176880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:109952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.935 [2024-07-15 13:24:31.176894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.935 [2024-07-15 13:24:31.176921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:109960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.935 [2024-07-15 13:24:31.176936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:54.935 [2024-07-15 13:24:31.176963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:109968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.935 [2024-07-15 13:24:31.176977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:54.935 [2024-07-15 13:24:31.177005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:109976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.935 [2024-07-15 13:24:31.177019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:54.935 [2024-07-15 13:24:31.177159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:109984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.935 [2024-07-15 13:24:31.177179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:54.935 [2024-07-15 13:24:48.405935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:116384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.935 [2024-07-15 13:24:48.406038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:54.935 [2024-07-15 13:24:48.406103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:116400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.935 [2024-07-15 13:24:48.406123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:54.935 [2024-07-15 13:24:48.406147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:116416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.935 [2024-07-15 13:24:48.406162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:54.935 [2024-07-15 13:24:48.406184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:115824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.935 [2024-07-15 13:24:48.406199] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:54.936 [2024-07-15 13:24:48.406237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:115856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.936 [2024-07-15 13:24:48.406253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:54.936 [2024-07-15 13:24:48.406277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:115888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.936 [2024-07-15 13:24:48.406325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:54.936 [2024-07-15 13:24:48.406350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:115920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.936 [2024-07-15 13:24:48.406365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:54.936 [2024-07-15 13:24:48.406386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:115952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.936 [2024-07-15 13:24:48.406400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:54.936 [2024-07-15 13:24:48.406421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:115984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.936 [2024-07-15 13:24:48.406435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:54.936 [2024-07-15 13:24:48.406456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:116016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.936 [2024-07-15 13:24:48.406470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:54.936 [2024-07-15 13:24:48.406491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:116048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.936 [2024-07-15 13:24:48.406504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:54.936 [2024-07-15 13:24:48.406525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:116080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.936 [2024-07-15 13:24:48.406548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:54.936 [2024-07-15 13:24:48.406568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:116440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.936 [2024-07-15 13:24:48.406582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:54.936 [2024-07-15 13:24:48.406603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:116456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.936 
[2024-07-15 13:24:48.406616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:54.936 [2024-07-15 13:24:48.406637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:116472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.936 [2024-07-15 13:24:48.406651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:54.936 [2024-07-15 13:24:48.406672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:116488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.936 [2024-07-15 13:24:48.406685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:54.936 [2024-07-15 13:24:48.407701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:116504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.936 [2024-07-15 13:24:48.407730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:54.936 [2024-07-15 13:24:48.407758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:116088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.936 [2024-07-15 13:24:48.407773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:54.936 [2024-07-15 13:24:48.407817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:116120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.936 [2024-07-15 13:24:48.407834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:54.936 [2024-07-15 13:24:48.407855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:116152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.936 [2024-07-15 13:24:48.407869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:54.936 [2024-07-15 13:24:48.407889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:116184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.936 [2024-07-15 13:24:48.407903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:54.936 [2024-07-15 13:24:48.407927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:115832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.936 [2024-07-15 13:24:48.407942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:54.936 [2024-07-15 13:24:48.407963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:115864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.936 [2024-07-15 13:24:48.407977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:54.936 [2024-07-15 13:24:48.407999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 
lba:115896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.936 [2024-07-15 13:24:48.408019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:54.936 [2024-07-15 13:24:48.408040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:115928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.936 [2024-07-15 13:24:48.408054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:54.936 [2024-07-15 13:24:48.408075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:115960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.936 [2024-07-15 13:24:48.408089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:54.936 [2024-07-15 13:24:48.408110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:115992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.936 [2024-07-15 13:24:48.408124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:54.936 [2024-07-15 13:24:48.408146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:116024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.936 [2024-07-15 13:24:48.408159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:54.936 [2024-07-15 13:24:48.408182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:116056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.936 [2024-07-15 13:24:48.408196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:54.936 [2024-07-15 13:24:48.408235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:116520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.936 [2024-07-15 13:24:48.408250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:54.936 [2024-07-15 13:24:48.408282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:116536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.936 [2024-07-15 13:24:48.408298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:54.936 [2024-07-15 13:24:48.408319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:116552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.936 [2024-07-15 13:24:48.408334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:54.936 [2024-07-15 13:24:48.408355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:116568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.936 [2024-07-15 13:24:48.408370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:54.936 [2024-07-15 13:24:48.408391] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:116584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.936 [2024-07-15 13:24:48.408405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:54.936 [2024-07-15 13:24:48.408426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:116600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.936 [2024-07-15 13:24:48.408440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:54.936 [2024-07-15 13:24:48.408460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:116616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.936 [2024-07-15 13:24:48.408475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:54.936 [2024-07-15 13:24:48.408496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:116632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.936 [2024-07-15 13:24:48.408510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:54.936 [2024-07-15 13:24:48.408532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:116648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.936 [2024-07-15 13:24:48.408546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:54.936 [2024-07-15 13:24:48.408568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:116112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.936 [2024-07-15 13:24:48.408582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:54.936 [2024-07-15 13:24:48.408603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:116144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.936 [2024-07-15 13:24:48.408617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:54.936 [2024-07-15 13:24:48.408638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:116176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.936 [2024-07-15 13:24:48.408652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:54.936 [2024-07-15 13:24:48.408673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:116656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.936 [2024-07-15 13:24:48.408686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:54.936 [2024-07-15 13:24:48.408708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:116672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.936 [2024-07-15 13:24:48.408730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0050 p:0 
m:0 dnr:0 00:25:54.936 [2024-07-15 13:24:48.408752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:116224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.936 [2024-07-15 13:24:48.408767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:54.936 [2024-07-15 13:24:48.408789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:116256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.936 [2024-07-15 13:24:48.408803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:54.936 [2024-07-15 13:24:48.408824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:116288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.936 [2024-07-15 13:24:48.408838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:54.936 [2024-07-15 13:24:48.408859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:116304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.936 [2024-07-15 13:24:48.408873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:54.936 [2024-07-15 13:24:48.408893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:116336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.936 [2024-07-15 13:24:48.408907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:54.936 [2024-07-15 13:24:48.408928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:116368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.936 [2024-07-15 13:24:48.408942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:54.936 [2024-07-15 13:24:48.408963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:116216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.936 [2024-07-15 13:24:48.408977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:54.936 [2024-07-15 13:24:48.408998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:116248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.936 [2024-07-15 13:24:48.409012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:54.936 [2024-07-15 13:24:48.409032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:116280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.936 [2024-07-15 13:24:48.409046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:54.936 [2024-07-15 13:24:48.409066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:116696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.936 [2024-07-15 13:24:48.409081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:54.936 [2024-07-15 13:24:48.409102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:116312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.936 [2024-07-15 13:24:48.409116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:54.936 [2024-07-15 13:24:48.409137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:116344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.936 [2024-07-15 13:24:48.409158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:54.936 [2024-07-15 13:24:48.409181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:116376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.936 [2024-07-15 13:24:48.409196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:54.936 [2024-07-15 13:24:48.410356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:116712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.936 [2024-07-15 13:24:48.410385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:54.936 [2024-07-15 13:24:48.410413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:116728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.937 [2024-07-15 13:24:48.410429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:54.937 [2024-07-15 13:24:48.410450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:116744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.937 [2024-07-15 13:24:48.410474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:54.937 [2024-07-15 13:24:48.410496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:116760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.937 [2024-07-15 13:24:48.410510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:54.937 Received shutdown signal, test time was about 35.173417 seconds 00:25:54.937 00:25:54.937 Latency(us) 00:25:54.937 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:54.937 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:54.937 Verification LBA range: start 0x0 length 0x4000 00:25:54.937 Nvme0n1 : 35.17 8095.42 31.62 0.00 0.00 15783.00 227.14 4087539.90 00:25:54.937 =================================================================================================================== 00:25:54.937 Total : 8095.42 31.62 0.00 0.00 15783.00 227.14 4087539.90 00:25:54.937 13:24:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:55.195 13:24:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 
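The NOTICE flood above is the expected tail of the multipath-status run: every queued READ/WRITE on qid:1 completes with ASYMMETRIC ACCESS INACCESSIBLE (03/02), that is, status code type 03h (path related) with status code 02h (ANA inaccessible), which the target returns while that path's ANA group is reported as inaccessible. Once the verify job (core mask 0x4, queue depth 128, 4096-byte I/O) prints its summary of roughly 35.17 s runtime and 8095.42 IOPS on Nvme0n1, the script tears the target down. The lines below are a minimal sketch of that teardown, assembled only from commands that appear in the surrounding trace (the rpc.py path, subsystem NQN, and module names are taken from this log, not from a general default):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # delete the NVMe-oF subsystem that exposed the multipath namespace
    "$rpc" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    # unload the kernel initiator modules so the next host test starts clean
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics

The same sequence can be seen verbatim in the trace that follows (host/multipath_status.sh lines 143 onward and the nvmftestfini steps).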
00:25:55.195 13:24:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:25:55.195 13:24:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:25:55.195 13:24:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:55.195 13:24:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:25:55.195 13:24:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:55.195 13:24:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:25:55.195 13:24:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:55.195 13:24:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:55.195 rmmod nvme_tcp 00:25:55.195 rmmod nvme_fabrics 00:25:55.195 rmmod nvme_keyring 00:25:55.195 13:24:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:55.195 13:24:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:25:55.195 13:24:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:25:55.195 13:24:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 107626 ']' 00:25:55.195 13:24:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 107626 00:25:55.195 13:24:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 107626 ']' 00:25:55.195 13:24:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 107626 00:25:55.195 13:24:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:25:55.195 13:24:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:55.195 13:24:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 107626 00:25:55.195 killing process with pid 107626 00:25:55.195 13:24:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:55.195 13:24:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:55.195 13:24:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 107626' 00:25:55.195 13:24:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 107626 00:25:55.195 13:24:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 107626 00:25:55.453 13:24:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:55.453 13:24:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:55.453 13:24:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:55.453 13:24:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:55.453 13:24:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:55.453 13:24:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:55.453 13:24:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:55.453 13:24:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:25:55.453 13:24:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:25:55.453 00:25:55.453 real 0m41.760s 00:25:55.453 user 2m16.522s 00:25:55.453 sys 0m10.841s 00:25:55.453 13:24:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:55.453 13:24:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:55.453 ************************************ 00:25:55.453 END TEST nvmf_host_multipath_status 00:25:55.453 ************************************ 00:25:55.453 13:24:52 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:55.453 13:24:52 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:25:55.453 13:24:52 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:55.453 13:24:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:55.453 ************************************ 00:25:55.453 START TEST nvmf_discovery_remove_ifc 00:25:55.453 ************************************ 00:25:55.453 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:55.712 * Looking for test storage... 00:25:55.712 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:25:55.712 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:55.712 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:25:55.712 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:55.712 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:55.712 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:55.712 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:55.712 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:55.712 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:55.712 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:55.712 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:55.712 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:55.712 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:55.712 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:25:55.712 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=c8b8b44b-387e-43b9-a950-dc0d98528a02 00:25:55.712 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:55.712 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:55.712 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:55.712 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:55.712 
13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:55.712 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:55.712 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:55.712 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:55.712 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:55.712 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:55.712 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:55.712 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:25:55.712 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:55.712 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:25:55.712 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:55.712 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:55.712 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:55.712 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:25:55.712 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:55.712 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:55.712 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:55.712 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:55.712 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:25:55.712 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:25:55.712 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:25:55.712 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:25:55.712 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:25:55.712 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:25:55.712 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:25:55.712 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:55.712 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:55.712 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:55.712 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:55.712 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:55.712 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:55.712 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:55.712 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:55.712 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:25:55.712 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:25:55.712 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:25:55.712 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:25:55.712 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:25:55.712 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:25:55.712 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:55.712 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:55.712 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:55.712 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:25:55.712 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:55.712 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:55.712 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 
-- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:55.712 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:55.712 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:55.712 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:55.712 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:55.712 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:55.712 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:25:55.712 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:25:55.712 Cannot find device "nvmf_tgt_br" 00:25:55.712 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # true 00:25:55.712 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:25:55.712 Cannot find device "nvmf_tgt_br2" 00:25:55.712 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # true 00:25:55.712 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:25:55.712 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:25:55.712 Cannot find device "nvmf_tgt_br" 00:25:55.712 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # true 00:25:55.712 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:25:55.712 Cannot find device "nvmf_tgt_br2" 00:25:55.712 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # true 00:25:55.712 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:25:55.712 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:25:55.712 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:55.712 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:55.712 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:25:55.712 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:55.712 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:55.712 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:25:55.712 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:25:55.712 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:55.712 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:55.712 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:55.712 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:55.971 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns 
nvmf_tgt_ns_spdk 00:25:55.971 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:55.971 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:55.971 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:55.971 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:25:55.971 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:25:55.971 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:25:55.971 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:25:55.971 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:55.971 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:55.971 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:55.971 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:25:55.971 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:25:55.971 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:25:55.971 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:55.971 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:55.971 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:55.971 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:55.971 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:25:55.971 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:55.971 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:25:55.971 00:25:55.971 --- 10.0.0.2 ping statistics --- 00:25:55.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:55.971 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:25:55.971 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:25:55.971 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:55.971 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:25:55.971 00:25:55.971 --- 10.0.0.3 ping statistics --- 00:25:55.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:55.971 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:25:55.971 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:55.971 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:55.971 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:25:55.971 00:25:55.971 --- 10.0.0.1 ping statistics --- 00:25:55.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:55.971 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:25:55.971 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:55.971 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@433 -- # return 0 00:25:55.971 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:55.971 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:55.971 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:55.971 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:55.971 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:55.971 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:55.971 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:55.971 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:25:55.971 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:55.971 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:55.971 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:55.971 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=109032 00:25:55.971 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:55.971 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 109032 00:25:55.971 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 109032 ']' 00:25:55.971 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:55.971 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:55.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:55.971 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:55.971 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:55.971 13:24:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:55.971 [2024-07-15 13:24:52.684146] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:25:55.972 [2024-07-15 13:24:52.684278] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:56.229 [2024-07-15 13:24:52.817974] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:56.229 [2024-07-15 13:24:52.915828] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:25:56.229 [2024-07-15 13:24:52.915882] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:56.229 [2024-07-15 13:24:52.915894] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:56.229 [2024-07-15 13:24:52.915902] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:56.229 [2024-07-15 13:24:52.915910] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:56.229 [2024-07-15 13:24:52.915942] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:57.163 13:24:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:57.163 13:24:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:25:57.163 13:24:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:57.163 13:24:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:57.163 13:24:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:57.163 13:24:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:57.163 13:24:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:25:57.163 13:24:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.163 13:24:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:57.163 [2024-07-15 13:24:53.785926] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:57.163 [2024-07-15 13:24:53.794098] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:57.163 null0 00:25:57.163 [2024-07-15 13:24:53.826013] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:57.163 13:24:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.163 13:24:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=109082 00:25:57.163 13:24:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 109082 /tmp/host.sock 00:25:57.163 13:24:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 109082 ']' 00:25:57.163 13:24:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:25:57.163 13:24:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:57.163 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:57.163 13:24:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 
00:25:57.163 13:24:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:57.163 13:24:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:25:57.163 13:24:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:57.472 [2024-07-15 13:24:53.911367] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:25:57.472 [2024-07-15 13:24:53.911496] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109082 ] 00:25:57.472 [2024-07-15 13:24:54.054485] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:57.472 [2024-07-15 13:24:54.162285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:58.416 13:24:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:58.416 13:24:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:25:58.416 13:24:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:58.416 13:24:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:25:58.416 13:24:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.416 13:24:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:58.416 13:24:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.416 13:24:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:25:58.416 13:24:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.416 13:24:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:58.416 13:24:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.416 13:24:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:25:58.416 13:24:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.416 13:24:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:59.346 [2024-07-15 13:24:56.059192] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:59.346 [2024-07-15 13:24:56.059251] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:59.346 [2024-07-15 13:24:56.059271] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:59.604 [2024-07-15 13:24:56.147349] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:59.604 [2024-07-15 13:24:56.210579] 
bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:59.604 [2024-07-15 13:24:56.210668] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:59.604 [2024-07-15 13:24:56.210699] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:59.604 [2024-07-15 13:24:56.210717] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:59.604 [2024-07-15 13:24:56.210759] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:59.604 13:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.604 13:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:25:59.604 13:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:59.604 13:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:59.604 13:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.604 13:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:59.604 13:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:59.604 13:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:59.604 [2024-07-15 13:24:56.217728] bdev_nvme.c:1614:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1f6a170 was disconnected and freed. delete nvme_qpair. 00:25:59.604 13:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:59.604 13:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.604 13:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:25:59.604 13:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:25:59.604 13:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:25:59.604 13:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:25:59.604 13:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:59.604 13:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:59.604 13:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:59.604 13:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.604 13:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:59.604 13:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:59.604 13:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:59.604 13:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.862 13:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:59.862 13:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:00.794 13:24:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:00.794 13:24:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:00.794 13:24:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:00.794 13:24:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.794 13:24:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:00.794 13:24:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:00.794 13:24:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:00.794 13:24:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.794 13:24:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:00.794 13:24:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:01.724 13:24:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:01.724 13:24:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:01.724 13:24:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:01.724 13:24:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.724 13:24:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:01.724 13:24:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:01.724 13:24:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:01.982 13:24:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.982 13:24:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:01.982 13:24:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:02.915 13:24:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:02.915 13:24:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:02.915 13:24:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:02.915 13:24:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.915 13:24:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:02.915 13:24:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:02.915 13:24:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:02.915 13:24:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.915 13:24:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:02.915 13:24:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:03.855 13:25:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:03.855 13:25:00 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:03.855 13:25:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:03.855 13:25:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.856 13:25:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:03.856 13:25:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:03.856 13:25:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:03.856 13:25:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.141 13:25:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:04.141 13:25:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:05.088 13:25:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:05.088 13:25:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:05.088 13:25:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.088 13:25:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:05.088 13:25:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:05.088 13:25:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:05.088 13:25:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:05.088 [2024-07-15 13:25:01.638788] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:26:05.088 [2024-07-15 13:25:01.638861] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:05.088 [2024-07-15 13:25:01.638878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.088 [2024-07-15 13:25:01.638893] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:05.088 [2024-07-15 13:25:01.638902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.088 [2024-07-15 13:25:01.638912] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:05.088 [2024-07-15 13:25:01.638922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.088 [2024-07-15 13:25:01.638931] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:05.088 [2024-07-15 13:25:01.638941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.088 [2024-07-15 13:25:01.638952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:05.088 [2024-07-15 13:25:01.638961] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.088 [2024-07-15 13:25:01.638971] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f45cf0 is same with the state(5) to be set 00:26:05.088 [2024-07-15 13:25:01.648781] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f45cf0 (9): Bad file descriptor 00:26:05.088 [2024-07-15 13:25:01.658810] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:05.088 13:25:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.088 13:25:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:05.088 13:25:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:06.021 13:25:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:06.021 13:25:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:06.021 13:25:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.021 13:25:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:06.021 13:25:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:06.021 13:25:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:06.021 13:25:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:06.021 [2024-07-15 13:25:02.722234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:26:06.021 [2024-07-15 13:25:02.722326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f45cf0 with addr=10.0.0.2, port=4420 00:26:06.022 [2024-07-15 13:25:02.722348] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f45cf0 is same with the state(5) to be set 00:26:06.022 [2024-07-15 13:25:02.722400] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f45cf0 (9): Bad file descriptor 00:26:06.022 [2024-07-15 13:25:02.722831] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:06.022 [2024-07-15 13:25:02.722862] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:06.022 [2024-07-15 13:25:02.722874] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:06.022 [2024-07-15 13:25:02.722885] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:06.022 [2024-07-15 13:25:02.722911] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
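The reconnect failures above (connect() errno 110, "controller reinitialization failed", "Resetting controller failed.") follow directly from the options the test passed to bdev_nvme_start_discovery earlier in this trace: with 10.0.0.2 unreachable, the 1-second reconnect attempts exhaust the 2-second controller-loss budget and the controller, and with it nvme0n1, is deleted. Reconstructed as a plain rpc.py call, a sketch of that invocation looks like the following (values copied from the rpc_cmd line above; the option descriptions are the usual bdev_nvme semantics, not quoted from this log):

# Sketch of the discovery start used by this test.
#   --ctrlr-loss-timeout-sec 2    delete the controller after ~2s without a connection
#   --reconnect-delay-sec 1       wait 1s between reconnect attempts
#   --fast-io-fail-timeout-sec 1  start failing I/O after ~1s without a path
#   --wait-for-attach             block the RPC until the first attach completes
rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
    -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
    -q nqn.2021-12.io.spdk:test \
    --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
    --fast-io-fail-timeout-sec 1 --wait-for-attach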
00:26:06.022 [2024-07-15 13:25:02.722922] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:06.022 13:25:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.279 13:25:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:06.279 13:25:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:07.212 [2024-07-15 13:25:03.722969] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:07.212 [2024-07-15 13:25:03.723046] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:07.212 [2024-07-15 13:25:03.723059] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:07.212 [2024-07-15 13:25:03.723070] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:26:07.212 [2024-07-15 13:25:03.723096] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:07.212 [2024-07-15 13:25:03.723127] bdev_nvme.c:6735:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:26:07.212 [2024-07-15 13:25:03.723195] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:07.212 [2024-07-15 13:25:03.723227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.212 [2024-07-15 13:25:03.723243] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:07.212 [2024-07-15 13:25:03.723254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.212 [2024-07-15 13:25:03.723264] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:07.212 [2024-07-15 13:25:03.723273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.212 [2024-07-15 13:25:03.723283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:07.212 [2024-07-15 13:25:03.723293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.212 [2024-07-15 13:25:03.723304] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:07.212 [2024-07-15 13:25:03.723313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.212 [2024-07-15 13:25:03.723322] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:26:07.212 [2024-07-15 13:25:03.723400] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f11ff0 (9): Bad file descriptor 00:26:07.212 [2024-07-15 13:25:03.724389] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:26:07.212 [2024-07-15 13:25:03.724414] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:26:07.212 13:25:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:07.212 13:25:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:07.212 13:25:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:07.212 13:25:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.212 13:25:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:07.212 13:25:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:07.212 13:25:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:07.212 13:25:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.212 13:25:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:26:07.212 13:25:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:07.212 13:25:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:07.212 13:25:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:26:07.212 13:25:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:07.212 13:25:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:07.212 13:25:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.212 13:25:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:07.212 13:25:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:07.212 13:25:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:07.212 13:25:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:07.212 13:25:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.213 13:25:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:07.213 13:25:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:08.587 13:25:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:08.587 13:25:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:08.587 13:25:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.587 13:25:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:08.587 13:25:04 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:08.587 13:25:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:08.587 13:25:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:08.587 13:25:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.587 13:25:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:08.587 13:25:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:09.154 [2024-07-15 13:25:05.730807] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:09.154 [2024-07-15 13:25:05.730858] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:09.154 [2024-07-15 13:25:05.730878] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:09.154 [2024-07-15 13:25:05.816960] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:26:09.154 [2024-07-15 13:25:05.872323] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:09.154 [2024-07-15 13:25:05.872401] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:09.154 [2024-07-15 13:25:05.872429] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:09.154 [2024-07-15 13:25:05.872448] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:26:09.154 [2024-07-15 13:25:05.872459] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:09.154 [2024-07-15 13:25:05.879503] bdev_nvme.c:1614:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1f43750 was disconnected and freed. delete nvme_qpair. 
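The bdev_get_bdevs / jq / sort / xargs round trips that run once a second throughout this test are its wait_for_bdev polling: the host app is queried for its bdev list until the list matches an expected value (nvme0n1 after the first attach, the empty string once the interface removal has torn the controller down, and nvme1n1 after the re-attach above). A rough reconstruction of that helper as a plain rpc.py script (the in-tree discovery_remove_ifc.sh wraps the RPC differently, so treat this as a sketch):

# Sketch of the polling pattern seen in this trace, not the verbatim helper.
get_bdev_list() {
    # One line of space-separated, sorted bdev names from the host app.
    rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

wait_for_bdev() {
    local expected=$1
    # Poll once per second until the bdev list equals the expected value.
    while [[ "$(get_bdev_list)" != "$expected" ]]; do
        sleep 1
    done
}

wait_for_bdev nvme0n1   # controller attached, namespace exposed
wait_for_bdev ''        # target interface removed, controller deleted
wait_for_bdev nvme1n1   # interface restored, discovery re-attaches as nvme1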
00:26:09.412 13:25:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:09.412 13:25:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:09.412 13:25:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:09.412 13:25:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.412 13:25:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:09.412 13:25:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:09.412 13:25:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:09.412 13:25:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.412 13:25:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:26:09.412 13:25:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:26:09.412 13:25:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 109082 00:26:09.412 13:25:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 109082 ']' 00:26:09.412 13:25:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 109082 00:26:09.412 13:25:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:26:09.412 13:25:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:09.412 13:25:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 109082 00:26:09.412 13:25:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:26:09.412 13:25:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:26:09.412 13:25:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 109082' 00:26:09.412 killing process with pid 109082 00:26:09.412 13:25:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 109082 00:26:09.412 13:25:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 109082 00:26:09.671 13:25:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:26:09.671 13:25:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:09.671 13:25:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:26:09.671 13:25:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:09.671 13:25:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:26:09.671 13:25:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:09.671 13:25:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:09.671 rmmod nvme_tcp 00:26:09.671 rmmod nvme_fabrics 00:26:09.671 rmmod nvme_keyring 00:26:09.671 13:25:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:09.671 13:25:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:26:09.671 13:25:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:26:09.671 
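The nvmfcleanup sequence above and the target shutdown that follows reduce to a short teardown, roughly as sketched below, under the assumption that _remove_spdk_ns simply deletes the nvmf_tgt_ns_spdk namespace (the helper itself is not expanded in this trace):

# Illustrative teardown, mirroring the order traced here.
kill "$hostpid"                     # stop the host-side app (pid 109082 in this run)
sync                                # flush before unloading the initiator modules
modprobe -v -r nvme-tcp             # drops nvme_tcp, nvme_fabrics, nvme_keyring
modprobe -v -r nvme-fabrics
kill "$nvmfpid"                     # stop the in-namespace nvmf_tgt (pid 109032)
ip netns delete nvmf_tgt_ns_spdk    # assumed body of _remove_spdk_ns
ip -4 addr flush nvmf_init_if       # clear the initiator-side veth address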
13:25:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 109032 ']' 00:26:09.671 13:25:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 109032 00:26:09.671 13:25:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 109032 ']' 00:26:09.671 13:25:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 109032 00:26:09.671 13:25:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:26:09.671 13:25:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:09.671 13:25:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 109032 00:26:09.671 13:25:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:26:09.671 13:25:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:26:09.671 13:25:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 109032' 00:26:09.671 killing process with pid 109032 00:26:09.671 13:25:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 109032 00:26:09.671 13:25:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 109032 00:26:09.929 13:25:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:09.929 13:25:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:09.929 13:25:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:09.929 13:25:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:09.929 13:25:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:09.929 13:25:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:09.929 13:25:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:09.929 13:25:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:09.929 13:25:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:26:09.929 00:26:09.929 real 0m14.468s 00:26:09.929 user 0m26.017s 00:26:09.929 sys 0m1.668s 00:26:09.929 13:25:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:09.929 13:25:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:09.929 ************************************ 00:26:09.929 END TEST nvmf_discovery_remove_ifc 00:26:09.929 ************************************ 00:26:10.187 13:25:06 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:10.187 13:25:06 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:26:10.187 13:25:06 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:10.187 13:25:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:10.187 ************************************ 00:26:10.187 START TEST nvmf_identify_kernel_target 00:26:10.187 ************************************ 00:26:10.187 13:25:06 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1121 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:10.187 * Looking for test storage... 00:26:10.187 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:10.187 13:25:06 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:10.187 13:25:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:26:10.187 13:25:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:10.187 13:25:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:10.187 13:25:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:10.187 13:25:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:10.187 13:25:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:10.187 13:25:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:10.187 13:25:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:10.187 13:25:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:10.187 13:25:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:10.187 13:25:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:10.187 13:25:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:26:10.187 13:25:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=c8b8b44b-387e-43b9-a950-dc0d98528a02 00:26:10.187 13:25:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:10.187 13:25:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:10.187 13:25:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:10.187 13:25:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:10.187 13:25:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:10.187 13:25:06 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:10.187 13:25:06 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:10.187 13:25:06 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:10.187 13:25:06 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:10.187 13:25:06 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:10.187 13:25:06 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:10.187 13:25:06 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:26:10.187 13:25:06 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:10.187 13:25:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:26:10.187 13:25:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:10.187 13:25:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:10.187 13:25:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:10.187 13:25:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:10.187 13:25:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:10.187 13:25:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:10.187 13:25:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:10.187 13:25:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:10.187 13:25:06 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:26:10.187 13:25:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:10.187 13:25:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:10.187 13:25:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:10.187 13:25:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:10.187 13:25:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:10.187 13:25:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:26:10.187 13:25:06 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:10.187 13:25:06 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:10.187 13:25:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:26:10.187 13:25:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:26:10.187 13:25:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:26:10.187 13:25:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:26:10.187 13:25:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:26:10.187 13:25:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:26:10.187 13:25:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:10.187 13:25:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:10.187 13:25:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:10.187 13:25:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:26:10.187 13:25:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:10.187 13:25:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:10.187 13:25:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:10.187 13:25:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:10.187 13:25:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:10.187 13:25:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:10.187 13:25:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:10.187 13:25:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:10.187 13:25:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:26:10.187 13:25:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:26:10.187 Cannot find device "nvmf_tgt_br" 00:26:10.187 13:25:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # true 00:26:10.187 13:25:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:26:10.187 Cannot find device "nvmf_tgt_br2" 00:26:10.187 13:25:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # true 00:26:10.187 13:25:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:26:10.187 13:25:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:26:10.187 Cannot find device "nvmf_tgt_br" 00:26:10.187 13:25:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # true 00:26:10.187 13:25:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:26:10.187 Cannot find device "nvmf_tgt_br2" 00:26:10.187 13:25:06 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # true 00:26:10.187 13:25:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:26:10.187 13:25:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:26:10.445 13:25:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:10.445 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:10.445 13:25:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:26:10.445 13:25:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:10.445 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:10.445 13:25:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:26:10.445 13:25:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:26:10.445 13:25:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:10.445 13:25:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:10.445 13:25:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:10.445 13:25:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:10.445 13:25:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:10.445 13:25:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:10.445 13:25:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:10.445 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:10.445 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:26:10.445 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:26:10.445 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:26:10.445 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:26:10.445 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:10.445 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:10.445 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:10.445 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:26:10.445 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:26:10.445 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:26:10.445 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:26:10.445 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:10.445 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:10.445 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:10.445 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:26:10.445 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:10.445 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:26:10.445 00:26:10.445 --- 10.0.0.2 ping statistics --- 00:26:10.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:10.445 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:26:10.445 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:26:10.445 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:10.445 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:26:10.445 00:26:10.445 --- 10.0.0.3 ping statistics --- 00:26:10.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:10.445 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:26:10.445 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:10.445 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:10.445 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:26:10.445 00:26:10.445 --- 10.0.0.1 ping statistics --- 00:26:10.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:10.445 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:26:10.445 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:10.445 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@433 -- # return 0 00:26:10.445 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:10.445 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:10.445 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:10.445 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:10.445 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:10.445 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:10.445 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:10.445 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:26:10.445 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:26:10.445 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:26:10.445 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:10.445 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:10.445 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:10.445 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:10.445 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:10.445 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:10.445 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:10.445 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:10.445 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:10.445 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:26:10.445 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:26:10.445 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:26:10.445 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:26:10.445 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:10.445 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:10.445 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:10.445 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:26:10.445 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:26:10.445 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:26:10.445 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:10.445 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:11.010 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:11.010 Waiting for block devices as requested 00:26:11.010 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:26:11.010 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:26:11.010 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:26:11.010 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:11.010 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:26:11.010 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:26:11.010 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:11.010 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:26:11.010 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:26:11.010 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:26:11.010 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:26:11.268 No valid GPT data, bailing 00:26:11.268 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:11.268 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:26:11.268 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:26:11.268 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:26:11.268 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:26:11.268 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:26:11.268 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:26:11.268 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme0n2 00:26:11.268 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:26:11.268 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:26:11.268 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:26:11.268 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:26:11.268 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:26:11.268 No valid GPT data, bailing 00:26:11.268 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:26:11.268 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 
-- # pt= 00:26:11.268 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:26:11.268 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:26:11.268 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:26:11.268 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:26:11.268 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:26:11.268 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme0n3 00:26:11.268 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:26:11.268 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:26:11.268 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:26:11.268 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:26:11.268 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:26:11.268 No valid GPT data, bailing 00:26:11.268 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:26:11.268 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:26:11.268 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:26:11.268 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:26:11.268 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:26:11.268 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:26:11.268 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:26:11.268 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:26:11.268 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:26:11.268 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:26:11.268 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:26:11.268 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:26:11.268 13:25:07 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:26:11.268 No valid GPT data, bailing 00:26:11.268 13:25:08 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:26:11.528 13:25:08 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:26:11.528 13:25:08 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:26:11.528 13:25:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:26:11.528 13:25:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:26:11.528 13:25:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 
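xtrace drops output redirections, so the bare echo commands in the configfs setup above and below (nvmf/common.sh@665-677) do not show their target files. A minimal sketch of the standard kernel nvmet configfs layout such a setup presumably writes to, using the NQN, backing device and listen address from this run (the exact attribute mapping is an assumption, since the trace elides it):

# Export a local block device as a kernel NVMe-oF TCP target via configfs.
modprobe nvmet
modprobe nvmet-tcp
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=$nvmet/ports/1

mkdir "$subsys"
mkdir "$subsys/namespaces/1"
mkdir "$port"

echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"            # model string reported by Identify
echo 1                                > "$subsys/attr_allow_any_host"   # accept any host NQN
echo /dev/nvme1n1                     > "$subsys/namespaces/1/device_path"
echo 1                                > "$subsys/namespaces/1/enable"

echo 10.0.0.1 > "$port/addr_traddr"   # listen address, transport, service id, address family
echo tcp      > "$port/addr_trtype"
echo 4420     > "$port/addr_trsvcid"
echo ipv4     > "$port/addr_adrfam"

# Publishing the subsystem on the port makes it visible to nvme discover.
ln -s "$subsys" "$port/subsystems/"
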
00:26:11.528 13:25:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:11.528 13:25:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:11.528 13:25:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:26:11.528 13:25:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:26:11.528 13:25:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:26:11.528 13:25:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:26:11.528 13:25:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:26:11.528 13:25:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:26:11.528 13:25:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:26:11.528 13:25:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:26:11.528 13:25:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:11.528 13:25:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid=c8b8b44b-387e-43b9-a950-dc0d98528a02 -a 10.0.0.1 -t tcp -s 4420 00:26:11.528 00:26:11.528 Discovery Log Number of Records 2, Generation counter 2 00:26:11.528 =====Discovery Log Entry 0====== 00:26:11.528 trtype: tcp 00:26:11.528 adrfam: ipv4 00:26:11.528 subtype: current discovery subsystem 00:26:11.528 treq: not specified, sq flow control disable supported 00:26:11.528 portid: 1 00:26:11.528 trsvcid: 4420 00:26:11.528 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:11.528 traddr: 10.0.0.1 00:26:11.528 eflags: none 00:26:11.528 sectype: none 00:26:11.528 =====Discovery Log Entry 1====== 00:26:11.528 trtype: tcp 00:26:11.528 adrfam: ipv4 00:26:11.528 subtype: nvme subsystem 00:26:11.528 treq: not specified, sq flow control disable supported 00:26:11.528 portid: 1 00:26:11.528 trsvcid: 4420 00:26:11.528 subnqn: nqn.2016-06.io.spdk:testnqn 00:26:11.528 traddr: 10.0.0.1 00:26:11.528 eflags: none 00:26:11.528 sectype: none 00:26:11.528 13:25:08 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:26:11.528 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:26:11.528 ===================================================== 00:26:11.528 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:26:11.528 ===================================================== 00:26:11.528 Controller Capabilities/Features 00:26:11.528 ================================ 00:26:11.528 Vendor ID: 0000 00:26:11.528 Subsystem Vendor ID: 0000 00:26:11.528 Serial Number: 68518d348dda3e60ebb3 00:26:11.528 Model Number: Linux 00:26:11.528 Firmware Version: 6.7.0-68 00:26:11.528 Recommended Arb Burst: 0 00:26:11.528 IEEE OUI Identifier: 00 00 00 00:26:11.528 Multi-path I/O 00:26:11.528 May have multiple subsystem ports: No 00:26:11.528 May have multiple controllers: No 00:26:11.528 Associated with SR-IOV VF: No 00:26:11.528 Max Data Transfer Size: Unlimited 00:26:11.528 Max Number of Namespaces: 0 
00:26:11.528 Max Number of I/O Queues: 1024 00:26:11.528 NVMe Specification Version (VS): 1.3 00:26:11.528 NVMe Specification Version (Identify): 1.3 00:26:11.528 Maximum Queue Entries: 1024 00:26:11.528 Contiguous Queues Required: No 00:26:11.528 Arbitration Mechanisms Supported 00:26:11.528 Weighted Round Robin: Not Supported 00:26:11.528 Vendor Specific: Not Supported 00:26:11.528 Reset Timeout: 7500 ms 00:26:11.528 Doorbell Stride: 4 bytes 00:26:11.528 NVM Subsystem Reset: Not Supported 00:26:11.528 Command Sets Supported 00:26:11.528 NVM Command Set: Supported 00:26:11.528 Boot Partition: Not Supported 00:26:11.528 Memory Page Size Minimum: 4096 bytes 00:26:11.528 Memory Page Size Maximum: 4096 bytes 00:26:11.528 Persistent Memory Region: Not Supported 00:26:11.528 Optional Asynchronous Events Supported 00:26:11.528 Namespace Attribute Notices: Not Supported 00:26:11.528 Firmware Activation Notices: Not Supported 00:26:11.528 ANA Change Notices: Not Supported 00:26:11.528 PLE Aggregate Log Change Notices: Not Supported 00:26:11.528 LBA Status Info Alert Notices: Not Supported 00:26:11.528 EGE Aggregate Log Change Notices: Not Supported 00:26:11.528 Normal NVM Subsystem Shutdown event: Not Supported 00:26:11.528 Zone Descriptor Change Notices: Not Supported 00:26:11.528 Discovery Log Change Notices: Supported 00:26:11.528 Controller Attributes 00:26:11.528 128-bit Host Identifier: Not Supported 00:26:11.528 Non-Operational Permissive Mode: Not Supported 00:26:11.528 NVM Sets: Not Supported 00:26:11.528 Read Recovery Levels: Not Supported 00:26:11.528 Endurance Groups: Not Supported 00:26:11.528 Predictable Latency Mode: Not Supported 00:26:11.528 Traffic Based Keep ALive: Not Supported 00:26:11.528 Namespace Granularity: Not Supported 00:26:11.528 SQ Associations: Not Supported 00:26:11.528 UUID List: Not Supported 00:26:11.528 Multi-Domain Subsystem: Not Supported 00:26:11.528 Fixed Capacity Management: Not Supported 00:26:11.528 Variable Capacity Management: Not Supported 00:26:11.528 Delete Endurance Group: Not Supported 00:26:11.528 Delete NVM Set: Not Supported 00:26:11.528 Extended LBA Formats Supported: Not Supported 00:26:11.528 Flexible Data Placement Supported: Not Supported 00:26:11.528 00:26:11.528 Controller Memory Buffer Support 00:26:11.528 ================================ 00:26:11.528 Supported: No 00:26:11.528 00:26:11.528 Persistent Memory Region Support 00:26:11.528 ================================ 00:26:11.528 Supported: No 00:26:11.528 00:26:11.528 Admin Command Set Attributes 00:26:11.528 ============================ 00:26:11.528 Security Send/Receive: Not Supported 00:26:11.528 Format NVM: Not Supported 00:26:11.528 Firmware Activate/Download: Not Supported 00:26:11.528 Namespace Management: Not Supported 00:26:11.528 Device Self-Test: Not Supported 00:26:11.528 Directives: Not Supported 00:26:11.528 NVMe-MI: Not Supported 00:26:11.528 Virtualization Management: Not Supported 00:26:11.528 Doorbell Buffer Config: Not Supported 00:26:11.528 Get LBA Status Capability: Not Supported 00:26:11.528 Command & Feature Lockdown Capability: Not Supported 00:26:11.528 Abort Command Limit: 1 00:26:11.528 Async Event Request Limit: 1 00:26:11.528 Number of Firmware Slots: N/A 00:26:11.528 Firmware Slot 1 Read-Only: N/A 00:26:11.528 Firmware Activation Without Reset: N/A 00:26:11.528 Multiple Update Detection Support: N/A 00:26:11.528 Firmware Update Granularity: No Information Provided 00:26:11.528 Per-Namespace SMART Log: No 00:26:11.528 Asymmetric Namespace Access Log Page: 
Not Supported 00:26:11.528 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:26:11.528 Command Effects Log Page: Not Supported 00:26:11.528 Get Log Page Extended Data: Supported 00:26:11.528 Telemetry Log Pages: Not Supported 00:26:11.528 Persistent Event Log Pages: Not Supported 00:26:11.528 Supported Log Pages Log Page: May Support 00:26:11.528 Commands Supported & Effects Log Page: Not Supported 00:26:11.528 Feature Identifiers & Effects Log Page:May Support 00:26:11.528 NVMe-MI Commands & Effects Log Page: May Support 00:26:11.528 Data Area 4 for Telemetry Log: Not Supported 00:26:11.529 Error Log Page Entries Supported: 1 00:26:11.529 Keep Alive: Not Supported 00:26:11.529 00:26:11.529 NVM Command Set Attributes 00:26:11.529 ========================== 00:26:11.529 Submission Queue Entry Size 00:26:11.529 Max: 1 00:26:11.529 Min: 1 00:26:11.529 Completion Queue Entry Size 00:26:11.529 Max: 1 00:26:11.529 Min: 1 00:26:11.529 Number of Namespaces: 0 00:26:11.529 Compare Command: Not Supported 00:26:11.529 Write Uncorrectable Command: Not Supported 00:26:11.529 Dataset Management Command: Not Supported 00:26:11.529 Write Zeroes Command: Not Supported 00:26:11.529 Set Features Save Field: Not Supported 00:26:11.529 Reservations: Not Supported 00:26:11.529 Timestamp: Not Supported 00:26:11.529 Copy: Not Supported 00:26:11.529 Volatile Write Cache: Not Present 00:26:11.529 Atomic Write Unit (Normal): 1 00:26:11.529 Atomic Write Unit (PFail): 1 00:26:11.529 Atomic Compare & Write Unit: 1 00:26:11.529 Fused Compare & Write: Not Supported 00:26:11.529 Scatter-Gather List 00:26:11.529 SGL Command Set: Supported 00:26:11.529 SGL Keyed: Not Supported 00:26:11.529 SGL Bit Bucket Descriptor: Not Supported 00:26:11.529 SGL Metadata Pointer: Not Supported 00:26:11.529 Oversized SGL: Not Supported 00:26:11.529 SGL Metadata Address: Not Supported 00:26:11.529 SGL Offset: Supported 00:26:11.529 Transport SGL Data Block: Not Supported 00:26:11.529 Replay Protected Memory Block: Not Supported 00:26:11.529 00:26:11.529 Firmware Slot Information 00:26:11.529 ========================= 00:26:11.529 Active slot: 0 00:26:11.529 00:26:11.529 00:26:11.529 Error Log 00:26:11.529 ========= 00:26:11.529 00:26:11.529 Active Namespaces 00:26:11.529 ================= 00:26:11.529 Discovery Log Page 00:26:11.529 ================== 00:26:11.529 Generation Counter: 2 00:26:11.529 Number of Records: 2 00:26:11.529 Record Format: 0 00:26:11.529 00:26:11.529 Discovery Log Entry 0 00:26:11.529 ---------------------- 00:26:11.529 Transport Type: 3 (TCP) 00:26:11.529 Address Family: 1 (IPv4) 00:26:11.529 Subsystem Type: 3 (Current Discovery Subsystem) 00:26:11.529 Entry Flags: 00:26:11.529 Duplicate Returned Information: 0 00:26:11.529 Explicit Persistent Connection Support for Discovery: 0 00:26:11.529 Transport Requirements: 00:26:11.529 Secure Channel: Not Specified 00:26:11.529 Port ID: 1 (0x0001) 00:26:11.529 Controller ID: 65535 (0xffff) 00:26:11.529 Admin Max SQ Size: 32 00:26:11.529 Transport Service Identifier: 4420 00:26:11.529 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:26:11.529 Transport Address: 10.0.0.1 00:26:11.529 Discovery Log Entry 1 00:26:11.529 ---------------------- 00:26:11.529 Transport Type: 3 (TCP) 00:26:11.529 Address Family: 1 (IPv4) 00:26:11.529 Subsystem Type: 2 (NVM Subsystem) 00:26:11.529 Entry Flags: 00:26:11.529 Duplicate Returned Information: 0 00:26:11.529 Explicit Persistent Connection Support for Discovery: 0 00:26:11.529 Transport Requirements: 00:26:11.529 
Secure Channel: Not Specified 00:26:11.529 Port ID: 1 (0x0001) 00:26:11.529 Controller ID: 65535 (0xffff) 00:26:11.529 Admin Max SQ Size: 32 00:26:11.529 Transport Service Identifier: 4420 00:26:11.529 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:26:11.529 Transport Address: 10.0.0.1 00:26:11.529 13:25:08 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:11.787 get_feature(0x01) failed 00:26:11.787 get_feature(0x02) failed 00:26:11.787 get_feature(0x04) failed 00:26:11.787 ===================================================== 00:26:11.787 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:11.787 ===================================================== 00:26:11.787 Controller Capabilities/Features 00:26:11.787 ================================ 00:26:11.787 Vendor ID: 0000 00:26:11.788 Subsystem Vendor ID: 0000 00:26:11.788 Serial Number: b516da4f3970aa3258ba 00:26:11.788 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:26:11.788 Firmware Version: 6.7.0-68 00:26:11.788 Recommended Arb Burst: 6 00:26:11.788 IEEE OUI Identifier: 00 00 00 00:26:11.788 Multi-path I/O 00:26:11.788 May have multiple subsystem ports: Yes 00:26:11.788 May have multiple controllers: Yes 00:26:11.788 Associated with SR-IOV VF: No 00:26:11.788 Max Data Transfer Size: Unlimited 00:26:11.788 Max Number of Namespaces: 1024 00:26:11.788 Max Number of I/O Queues: 128 00:26:11.788 NVMe Specification Version (VS): 1.3 00:26:11.788 NVMe Specification Version (Identify): 1.3 00:26:11.788 Maximum Queue Entries: 1024 00:26:11.788 Contiguous Queues Required: No 00:26:11.788 Arbitration Mechanisms Supported 00:26:11.788 Weighted Round Robin: Not Supported 00:26:11.788 Vendor Specific: Not Supported 00:26:11.788 Reset Timeout: 7500 ms 00:26:11.788 Doorbell Stride: 4 bytes 00:26:11.788 NVM Subsystem Reset: Not Supported 00:26:11.788 Command Sets Supported 00:26:11.788 NVM Command Set: Supported 00:26:11.788 Boot Partition: Not Supported 00:26:11.788 Memory Page Size Minimum: 4096 bytes 00:26:11.788 Memory Page Size Maximum: 4096 bytes 00:26:11.788 Persistent Memory Region: Not Supported 00:26:11.788 Optional Asynchronous Events Supported 00:26:11.788 Namespace Attribute Notices: Supported 00:26:11.788 Firmware Activation Notices: Not Supported 00:26:11.788 ANA Change Notices: Supported 00:26:11.788 PLE Aggregate Log Change Notices: Not Supported 00:26:11.788 LBA Status Info Alert Notices: Not Supported 00:26:11.788 EGE Aggregate Log Change Notices: Not Supported 00:26:11.788 Normal NVM Subsystem Shutdown event: Not Supported 00:26:11.788 Zone Descriptor Change Notices: Not Supported 00:26:11.788 Discovery Log Change Notices: Not Supported 00:26:11.788 Controller Attributes 00:26:11.788 128-bit Host Identifier: Supported 00:26:11.788 Non-Operational Permissive Mode: Not Supported 00:26:11.788 NVM Sets: Not Supported 00:26:11.788 Read Recovery Levels: Not Supported 00:26:11.788 Endurance Groups: Not Supported 00:26:11.788 Predictable Latency Mode: Not Supported 00:26:11.788 Traffic Based Keep ALive: Supported 00:26:11.788 Namespace Granularity: Not Supported 00:26:11.788 SQ Associations: Not Supported 00:26:11.788 UUID List: Not Supported 00:26:11.788 Multi-Domain Subsystem: Not Supported 00:26:11.788 Fixed Capacity Management: Not Supported 00:26:11.788 Variable Capacity Management: Not Supported 00:26:11.788 
Delete Endurance Group: Not Supported 00:26:11.788 Delete NVM Set: Not Supported 00:26:11.788 Extended LBA Formats Supported: Not Supported 00:26:11.788 Flexible Data Placement Supported: Not Supported 00:26:11.788 00:26:11.788 Controller Memory Buffer Support 00:26:11.788 ================================ 00:26:11.788 Supported: No 00:26:11.788 00:26:11.788 Persistent Memory Region Support 00:26:11.788 ================================ 00:26:11.788 Supported: No 00:26:11.788 00:26:11.788 Admin Command Set Attributes 00:26:11.788 ============================ 00:26:11.788 Security Send/Receive: Not Supported 00:26:11.788 Format NVM: Not Supported 00:26:11.788 Firmware Activate/Download: Not Supported 00:26:11.788 Namespace Management: Not Supported 00:26:11.788 Device Self-Test: Not Supported 00:26:11.788 Directives: Not Supported 00:26:11.788 NVMe-MI: Not Supported 00:26:11.788 Virtualization Management: Not Supported 00:26:11.788 Doorbell Buffer Config: Not Supported 00:26:11.788 Get LBA Status Capability: Not Supported 00:26:11.788 Command & Feature Lockdown Capability: Not Supported 00:26:11.788 Abort Command Limit: 4 00:26:11.788 Async Event Request Limit: 4 00:26:11.788 Number of Firmware Slots: N/A 00:26:11.788 Firmware Slot 1 Read-Only: N/A 00:26:11.788 Firmware Activation Without Reset: N/A 00:26:11.788 Multiple Update Detection Support: N/A 00:26:11.788 Firmware Update Granularity: No Information Provided 00:26:11.788 Per-Namespace SMART Log: Yes 00:26:11.788 Asymmetric Namespace Access Log Page: Supported 00:26:11.788 ANA Transition Time : 10 sec 00:26:11.788 00:26:11.788 Asymmetric Namespace Access Capabilities 00:26:11.788 ANA Optimized State : Supported 00:26:11.788 ANA Non-Optimized State : Supported 00:26:11.788 ANA Inaccessible State : Supported 00:26:11.788 ANA Persistent Loss State : Supported 00:26:11.788 ANA Change State : Supported 00:26:11.788 ANAGRPID is not changed : No 00:26:11.788 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:26:11.788 00:26:11.788 ANA Group Identifier Maximum : 128 00:26:11.788 Number of ANA Group Identifiers : 128 00:26:11.788 Max Number of Allowed Namespaces : 1024 00:26:11.788 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:26:11.788 Command Effects Log Page: Supported 00:26:11.788 Get Log Page Extended Data: Supported 00:26:11.788 Telemetry Log Pages: Not Supported 00:26:11.788 Persistent Event Log Pages: Not Supported 00:26:11.788 Supported Log Pages Log Page: May Support 00:26:11.788 Commands Supported & Effects Log Page: Not Supported 00:26:11.788 Feature Identifiers & Effects Log Page:May Support 00:26:11.788 NVMe-MI Commands & Effects Log Page: May Support 00:26:11.788 Data Area 4 for Telemetry Log: Not Supported 00:26:11.788 Error Log Page Entries Supported: 128 00:26:11.788 Keep Alive: Supported 00:26:11.788 Keep Alive Granularity: 1000 ms 00:26:11.788 00:26:11.788 NVM Command Set Attributes 00:26:11.788 ========================== 00:26:11.788 Submission Queue Entry Size 00:26:11.788 Max: 64 00:26:11.788 Min: 64 00:26:11.788 Completion Queue Entry Size 00:26:11.788 Max: 16 00:26:11.788 Min: 16 00:26:11.788 Number of Namespaces: 1024 00:26:11.788 Compare Command: Not Supported 00:26:11.788 Write Uncorrectable Command: Not Supported 00:26:11.788 Dataset Management Command: Supported 00:26:11.788 Write Zeroes Command: Supported 00:26:11.788 Set Features Save Field: Not Supported 00:26:11.788 Reservations: Not Supported 00:26:11.788 Timestamp: Not Supported 00:26:11.788 Copy: Not Supported 00:26:11.788 Volatile Write Cache: Present 
00:26:11.788 Atomic Write Unit (Normal): 1 00:26:11.788 Atomic Write Unit (PFail): 1 00:26:11.788 Atomic Compare & Write Unit: 1 00:26:11.788 Fused Compare & Write: Not Supported 00:26:11.788 Scatter-Gather List 00:26:11.788 SGL Command Set: Supported 00:26:11.788 SGL Keyed: Not Supported 00:26:11.788 SGL Bit Bucket Descriptor: Not Supported 00:26:11.788 SGL Metadata Pointer: Not Supported 00:26:11.788 Oversized SGL: Not Supported 00:26:11.788 SGL Metadata Address: Not Supported 00:26:11.788 SGL Offset: Supported 00:26:11.788 Transport SGL Data Block: Not Supported 00:26:11.788 Replay Protected Memory Block: Not Supported 00:26:11.788 00:26:11.788 Firmware Slot Information 00:26:11.788 ========================= 00:26:11.788 Active slot: 0 00:26:11.788 00:26:11.788 Asymmetric Namespace Access 00:26:11.788 =========================== 00:26:11.788 Change Count : 0 00:26:11.788 Number of ANA Group Descriptors : 1 00:26:11.788 ANA Group Descriptor : 0 00:26:11.788 ANA Group ID : 1 00:26:11.788 Number of NSID Values : 1 00:26:11.788 Change Count : 0 00:26:11.788 ANA State : 1 00:26:11.788 Namespace Identifier : 1 00:26:11.788 00:26:11.788 Commands Supported and Effects 00:26:11.788 ============================== 00:26:11.788 Admin Commands 00:26:11.788 -------------- 00:26:11.788 Get Log Page (02h): Supported 00:26:11.788 Identify (06h): Supported 00:26:11.788 Abort (08h): Supported 00:26:11.788 Set Features (09h): Supported 00:26:11.788 Get Features (0Ah): Supported 00:26:11.788 Asynchronous Event Request (0Ch): Supported 00:26:11.788 Keep Alive (18h): Supported 00:26:11.788 I/O Commands 00:26:11.788 ------------ 00:26:11.788 Flush (00h): Supported 00:26:11.788 Write (01h): Supported LBA-Change 00:26:11.788 Read (02h): Supported 00:26:11.788 Write Zeroes (08h): Supported LBA-Change 00:26:11.788 Dataset Management (09h): Supported 00:26:11.788 00:26:11.788 Error Log 00:26:11.788 ========= 00:26:11.788 Entry: 0 00:26:11.788 Error Count: 0x3 00:26:11.788 Submission Queue Id: 0x0 00:26:11.788 Command Id: 0x5 00:26:11.788 Phase Bit: 0 00:26:11.788 Status Code: 0x2 00:26:11.788 Status Code Type: 0x0 00:26:11.788 Do Not Retry: 1 00:26:11.788 Error Location: 0x28 00:26:11.788 LBA: 0x0 00:26:11.788 Namespace: 0x0 00:26:11.788 Vendor Log Page: 0x0 00:26:11.788 ----------- 00:26:11.788 Entry: 1 00:26:11.788 Error Count: 0x2 00:26:11.788 Submission Queue Id: 0x0 00:26:11.788 Command Id: 0x5 00:26:11.788 Phase Bit: 0 00:26:11.788 Status Code: 0x2 00:26:11.788 Status Code Type: 0x0 00:26:11.788 Do Not Retry: 1 00:26:11.788 Error Location: 0x28 00:26:11.788 LBA: 0x0 00:26:11.788 Namespace: 0x0 00:26:11.788 Vendor Log Page: 0x0 00:26:11.788 ----------- 00:26:11.788 Entry: 2 00:26:11.788 Error Count: 0x1 00:26:11.788 Submission Queue Id: 0x0 00:26:11.788 Command Id: 0x4 00:26:11.788 Phase Bit: 0 00:26:11.788 Status Code: 0x2 00:26:11.788 Status Code Type: 0x0 00:26:11.788 Do Not Retry: 1 00:26:11.788 Error Location: 0x28 00:26:11.788 LBA: 0x0 00:26:11.788 Namespace: 0x0 00:26:11.788 Vendor Log Page: 0x0 00:26:11.788 00:26:11.788 Number of Queues 00:26:11.788 ================ 00:26:11.788 Number of I/O Submission Queues: 128 00:26:11.788 Number of I/O Completion Queues: 128 00:26:11.788 00:26:11.788 ZNS Specific Controller Data 00:26:11.788 ============================ 00:26:11.788 Zone Append Size Limit: 0 00:26:11.788 00:26:11.788 00:26:11.788 Active Namespaces 00:26:11.788 ================= 00:26:11.788 get_feature(0x05) failed 00:26:11.788 Namespace ID:1 00:26:11.788 Command Set Identifier: NVM (00h) 
00:26:11.788 Deallocate: Supported 00:26:11.788 Deallocated/Unwritten Error: Not Supported 00:26:11.788 Deallocated Read Value: Unknown 00:26:11.788 Deallocate in Write Zeroes: Not Supported 00:26:11.788 Deallocated Guard Field: 0xFFFF 00:26:11.788 Flush: Supported 00:26:11.788 Reservation: Not Supported 00:26:11.788 Namespace Sharing Capabilities: Multiple Controllers 00:26:11.788 Size (in LBAs): 1310720 (5GiB) 00:26:11.788 Capacity (in LBAs): 1310720 (5GiB) 00:26:11.788 Utilization (in LBAs): 1310720 (5GiB) 00:26:11.788 UUID: d6ccc07f-d928-4977-b78e-cb948038ae3a 00:26:11.788 Thin Provisioning: Not Supported 00:26:11.788 Per-NS Atomic Units: Yes 00:26:11.788 Atomic Boundary Size (Normal): 0 00:26:11.788 Atomic Boundary Size (PFail): 0 00:26:11.788 Atomic Boundary Offset: 0 00:26:11.788 NGUID/EUI64 Never Reused: No 00:26:11.788 ANA group ID: 1 00:26:11.788 Namespace Write Protected: No 00:26:11.788 Number of LBA Formats: 1 00:26:11.788 Current LBA Format: LBA Format #00 00:26:11.788 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:26:11.788 00:26:11.788 13:25:08 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:26:11.788 13:25:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:11.788 13:25:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:26:11.788 13:25:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:11.788 13:25:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:26:11.788 13:25:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:11.788 13:25:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:11.788 rmmod nvme_tcp 00:26:11.788 rmmod nvme_fabrics 00:26:11.788 13:25:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:11.788 13:25:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:26:11.788 13:25:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:26:11.788 13:25:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:26:11.788 13:25:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:11.788 13:25:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:11.788 13:25:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:11.788 13:25:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:11.788 13:25:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:11.788 13:25:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:11.788 13:25:08 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:11.788 13:25:08 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:12.046 13:25:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:26:12.046 13:25:08 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:26:12.046 13:25:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:26:12.046 
13:25:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:26:12.046 13:25:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:12.046 13:25:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:12.046 13:25:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:12.046 13:25:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:12.046 13:25:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:26:12.046 13:25:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:26:12.046 13:25:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:26:12.609 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:12.609 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:26:12.609 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:26:12.868 ************************************ 00:26:12.868 END TEST nvmf_identify_kernel_target 00:26:12.868 ************************************ 00:26:12.868 00:26:12.868 real 0m2.718s 00:26:12.868 user 0m0.890s 00:26:12.868 sys 0m1.342s 00:26:12.868 13:25:09 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:12.868 13:25:09 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:12.868 13:25:09 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:12.868 13:25:09 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:26:12.868 13:25:09 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:12.868 13:25:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:12.868 ************************************ 00:26:12.868 START TEST nvmf_auth_host 00:26:12.868 ************************************ 00:26:12.868 13:25:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:12.868 * Looking for test storage... 
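The clean_kernel_target teardown traced above reverses that configfs setup; the target of the elided "echo 0" is presumably the namespace enable flag. A condensed sketch using the same paths:

# Tear down the kernel NVMe-oF target (reverse order of the setup).
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=$nvmet/ports/1

echo 0 > "$subsys/namespaces/1/enable"                 # disable the namespace first
rm -f "$port/subsystems/nqn.2016-06.io.spdk:testnqn"   # unpublish the subsystem from the port
rmdir "$subsys/namespaces/1"
rmdir "$port"
rmdir "$subsys"
modprobe -r nvmet_tcp nvmet                            # unload once nothing holds the modules
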
00:26:12.868 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:12.868 13:25:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:12.868 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:26:12.868 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:12.868 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:12.868 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:12.868 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:12.868 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:12.868 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:12.868 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:12.868 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:12.868 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:12.868 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:12.868 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:26:12.868 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=c8b8b44b-387e-43b9-a950-dc0d98528a02 00:26:12.868 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:12.868 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:12.868 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:12.868 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:12.868 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:12.868 13:25:09 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:12.868 13:25:09 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:12.868 13:25:09 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:12.868 13:25:09 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:12.868 13:25:09 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:12.868 13:25:09 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:12.868 13:25:09 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:26:12.868 13:25:09 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:12.868 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:26:12.868 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:12.868 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:12.868 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:12.868 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:12.868 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:12.868 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:12.868 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:12.868 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:12.868 13:25:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:26:12.868 13:25:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:26:12.868 13:25:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:26:12.868 13:25:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:26:12.868 13:25:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:12.868 13:25:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:12.868 13:25:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:26:12.868 13:25:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:26:12.868 13:25:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:26:12.868 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:12.868 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:12.868 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:12.868 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:12.868 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:12.869 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:12.869 13:25:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:12.869 13:25:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:12.869 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:26:12.869 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:26:12.869 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:26:12.869 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:26:12.869 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:26:12.869 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:26:12.869 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:12.869 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:12.869 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:12.869 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:26:12.869 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:12.869 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:12.869 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:12.869 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:12.869 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:12.869 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:12.869 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:12.869 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:12.869 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:26:12.869 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:26:13.126 Cannot find device "nvmf_tgt_br" 00:26:13.126 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # true 00:26:13.126 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:26:13.126 Cannot find device "nvmf_tgt_br2" 00:26:13.126 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # true 00:26:13.126 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:26:13.126 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:26:13.126 Cannot find device "nvmf_tgt_br" 
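The nvmf_veth_init sequence being re-run below (it already ran once for the identify test) builds a small bridged topology: nvmf_init_if keeps 10.0.0.1 in the root namespace, the target interfaces (10.0.0.2 and 10.0.0.3) live inside the nvmf_tgt_ns_spdk namespace, and the peer ends of all three veth pairs are enslaved to the nvmf_br bridge. A condensed sketch of the same steps as a plain script:

# Bridged veth topology used by these nvmf host tests.
ip netns add nvmf_tgt_ns_spdk

ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# Target ends go into the namespace; addresses match the trace.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# Bring everything up and bridge the peer ends together.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Let NVMe/TCP (port 4420) reach the initiator interface and cross the bridge.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
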
00:26:13.126 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # true 00:26:13.126 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:26:13.126 Cannot find device "nvmf_tgt_br2" 00:26:13.126 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # true 00:26:13.126 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:26:13.126 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:26:13.126 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:13.126 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:13.126 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:26:13.126 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:13.126 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:13.126 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:26:13.126 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:26:13.126 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:13.126 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:13.126 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:13.126 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:13.126 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:13.126 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:13.126 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:13.126 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:13.126 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:26:13.126 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:26:13.126 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:26:13.126 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:26:13.126 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:13.126 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:13.126 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:13.126 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:26:13.126 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:26:13.383 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:26:13.383 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:13.383 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set 
nvmf_tgt_br2 master nvmf_br 00:26:13.383 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:13.383 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:13.383 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:26:13.383 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:13.383 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.094 ms 00:26:13.383 00:26:13.383 --- 10.0.0.2 ping statistics --- 00:26:13.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:13.383 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:26:13.383 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:26:13.383 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:13.383 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.092 ms 00:26:13.383 00:26:13.383 --- 10.0.0.3 ping statistics --- 00:26:13.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:13.383 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:26:13.383 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:13.383 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:13.383 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.054 ms 00:26:13.383 00:26:13.383 --- 10.0.0.1 ping statistics --- 00:26:13.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:13.383 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:26:13.383 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:13.383 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@433 -- # return 0 00:26:13.383 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:13.383 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:13.383 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:13.383 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:13.383 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:13.383 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:13.383 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:13.383 13:25:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:26:13.383 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:13.383 13:25:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:26:13.383 13:25:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.383 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=109972 00:26:13.383 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:26:13.383 13:25:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 109972 00:26:13.383 13:25:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@827 -- # '[' -z 109972 ']' 00:26:13.383 13:25:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:13.383 13:25:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:13.383 13:25:09 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:13.383 13:25:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:13.383 13:25:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.315 13:25:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:14.315 13:25:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@860 -- # return 0 00:26:14.315 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:14.315 13:25:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:14.315 13:25:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.315 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:14.315 13:25:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:26:14.315 13:25:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:26:14.316 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:14.316 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:14.316 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:14.316 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:26:14.316 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:26:14.316 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:14.316 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=280be938aefa82fd70c4137f0d0d9b20 00:26:14.316 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:26:14.573 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.P60 00:26:14.573 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 280be938aefa82fd70c4137f0d0d9b20 0 00:26:14.573 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 280be938aefa82fd70c4137f0d0d9b20 0 00:26:14.573 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:14.573 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:14.574 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=280be938aefa82fd70c4137f0d0d9b20 00:26:14.574 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:26:14.574 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:14.574 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.P60 00:26:14.574 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.P60 00:26:14.574 13:25:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.P60 00:26:14.574 13:25:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:26:14.574 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:14.574 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:14.574 13:25:11 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@724 -- # local -A digests 00:26:14.574 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:26:14.574 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:26:14.574 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:14.574 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b76c26558c7770a8ac933f12faa2264a6a012efdcc012de2ce10e4e979e32378 00:26:14.574 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:26:14.574 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Oyu 00:26:14.574 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b76c26558c7770a8ac933f12faa2264a6a012efdcc012de2ce10e4e979e32378 3 00:26:14.574 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b76c26558c7770a8ac933f12faa2264a6a012efdcc012de2ce10e4e979e32378 3 00:26:14.574 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:14.574 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:14.574 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b76c26558c7770a8ac933f12faa2264a6a012efdcc012de2ce10e4e979e32378 00:26:14.574 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:26:14.574 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:14.574 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Oyu 00:26:14.574 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Oyu 00:26:14.574 13:25:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.Oyu 00:26:14.574 13:25:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:26:14.574 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:14.574 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:14.574 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:14.574 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:26:14.574 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:26:14.574 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:14.574 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a5fe30744e2c871f753ac658163be64dc654b4cd0fc2b042 00:26:14.574 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:26:14.574 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.F7s 00:26:14.574 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a5fe30744e2c871f753ac658163be64dc654b4cd0fc2b042 0 00:26:14.574 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a5fe30744e2c871f753ac658163be64dc654b4cd0fc2b042 0 00:26:14.574 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:14.574 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:14.574 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a5fe30744e2c871f753ac658163be64dc654b4cd0fc2b042 00:26:14.574 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:26:14.574 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # 
python - 00:26:14.574 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.F7s 00:26:14.574 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.F7s 00:26:14.574 13:25:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.F7s 00:26:14.574 13:25:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:26:14.574 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:14.574 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:14.574 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:14.574 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:26:14.574 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:26:14.574 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:14.574 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=2e7861fa5f3a19fc584142cc040b5784ee779adb866dbef6 00:26:14.574 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:26:14.574 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.jqa 00:26:14.574 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 2e7861fa5f3a19fc584142cc040b5784ee779adb866dbef6 2 00:26:14.574 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 2e7861fa5f3a19fc584142cc040b5784ee779adb866dbef6 2 00:26:14.574 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:14.574 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:14.574 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=2e7861fa5f3a19fc584142cc040b5784ee779adb866dbef6 00:26:14.574 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:26:14.574 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:14.574 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.jqa 00:26:14.574 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.jqa 00:26:14.574 13:25:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.jqa 00:26:14.574 13:25:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:14.574 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:14.574 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:14.574 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:14.574 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:26:14.574 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:26:14.574 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:14.574 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=00a56f5fbc94baea211b27ea337b8758 00:26:14.832 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:26:14.832 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.zrB 00:26:14.832 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 00a56f5fbc94baea211b27ea337b8758 
1 00:26:14.832 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 00a56f5fbc94baea211b27ea337b8758 1 00:26:14.832 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:14.832 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:14.832 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=00a56f5fbc94baea211b27ea337b8758 00:26:14.832 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:26:14.832 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:14.832 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.zrB 00:26:14.832 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.zrB 00:26:14.832 13:25:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.zrB 00:26:14.832 13:25:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:14.832 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:14.832 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:14.832 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:14.832 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:26:14.832 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:26:14.832 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:14.832 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b5ea9efa264904440f67b267597c7c0a 00:26:14.832 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:26:14.832 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.8xK 00:26:14.832 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b5ea9efa264904440f67b267597c7c0a 1 00:26:14.832 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b5ea9efa264904440f67b267597c7c0a 1 00:26:14.832 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:14.832 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:14.832 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b5ea9efa264904440f67b267597c7c0a 00:26:14.832 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:26:14.832 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:14.832 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.8xK 00:26:14.832 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.8xK 00:26:14.832 13:25:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.8xK 00:26:14.832 13:25:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:26:14.832 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:14.832 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:14.832 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:14.832 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:26:14.832 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:26:14.832 13:25:11 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:14.833 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=21631dc8dc5fadbc8357a62380bfb60d70d0120c6b29fa4e 00:26:14.833 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:26:14.833 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.9kQ 00:26:14.833 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 21631dc8dc5fadbc8357a62380bfb60d70d0120c6b29fa4e 2 00:26:14.833 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 21631dc8dc5fadbc8357a62380bfb60d70d0120c6b29fa4e 2 00:26:14.833 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:14.833 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:14.833 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=21631dc8dc5fadbc8357a62380bfb60d70d0120c6b29fa4e 00:26:14.833 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:26:14.833 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:14.833 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.9kQ 00:26:14.833 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.9kQ 00:26:14.833 13:25:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.9kQ 00:26:14.833 13:25:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:26:14.833 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:14.833 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:14.833 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:14.833 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:26:14.833 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:26:14.833 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:14.833 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=af548148c969577fd94092452bfc8bcf 00:26:14.833 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:26:14.833 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.j7u 00:26:14.833 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key af548148c969577fd94092452bfc8bcf 0 00:26:14.833 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 af548148c969577fd94092452bfc8bcf 0 00:26:14.833 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:14.833 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:14.833 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=af548148c969577fd94092452bfc8bcf 00:26:14.833 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:26:14.833 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:14.833 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.j7u 00:26:14.833 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.j7u 00:26:14.833 13:25:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.j7u 00:26:14.833 13:25:11 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:26:14.833 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:14.833 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:14.833 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:14.833 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:26:14.833 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:26:14.833 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:14.833 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=cd610a43ad70dad7edb775b41088412d9b7314ae5a44e3905940f20ea308a8d0 00:26:14.833 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:26:14.833 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.inT 00:26:14.833 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key cd610a43ad70dad7edb775b41088412d9b7314ae5a44e3905940f20ea308a8d0 3 00:26:14.833 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 cd610a43ad70dad7edb775b41088412d9b7314ae5a44e3905940f20ea308a8d0 3 00:26:14.833 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:14.833 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:14.833 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=cd610a43ad70dad7edb775b41088412d9b7314ae5a44e3905940f20ea308a8d0 00:26:14.833 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:26:14.833 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:15.091 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.inT 00:26:15.091 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.inT 00:26:15.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:15.091 13:25:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.inT 00:26:15.091 13:25:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:26:15.091 13:25:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 109972 00:26:15.091 13:25:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@827 -- # '[' -z 109972 ']' 00:26:15.091 13:25:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:15.091 13:25:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:15.091 13:25:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
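Each gen_dhchap_key call above draws len/2 random bytes and prints them as a len-character hex string with xxd, then format_dhchap_key wraps that string into the DH-HMAC-CHAP secret representation through a short inline python step. Judging by the DHHC-1 strings echoed later in this log, the payload is the base64 of the ASCII key plus four trailing bytes (presumably a CRC-32, as in nvme-cli's gen-dhchap-key), prefixed with a two-digit digest id (00 = null, 01 = sha256, 02 = sha384, 03 = sha512). A stand-alone sketch of that behaviour, reconstructed from the trace rather than copied from nvmf/common.sh (the checksum byte order is an assumption):

key=$(xxd -p -c0 -l 16 /dev/urandom)   # 32 hex characters of key material, as in the trace
file=$(mktemp -t spdk.key-null.XXX)
python3 - "$key" 0 > "$file" <<'EOF'
import sys, base64, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")   # assumed byte order for the appended checksum
print("DHHC-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()), end="")
EOF
chmod 0600 "$file"                      # the trace chmods every generated key file to 0600

The resulting /tmp/spdk.key-* files are what the keyring_file_add_key RPCs below load, and the same DHHC-1 strings are later echoed into the kernel target's configfs for the in-kernel side of the handshake.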
00:26:15.091 13:25:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:15.091 13:25:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.349 13:25:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:15.349 13:25:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@860 -- # return 0 00:26:15.349 13:25:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:15.349 13:25:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.P60 00:26:15.349 13:25:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.349 13:25:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.349 13:25:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.349 13:25:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.Oyu ]] 00:26:15.349 13:25:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Oyu 00:26:15.349 13:25:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.349 13:25:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.349 13:25:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.349 13:25:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:15.349 13:25:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.F7s 00:26:15.349 13:25:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.349 13:25:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.349 13:25:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.349 13:25:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.jqa ]] 00:26:15.349 13:25:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.jqa 00:26:15.349 13:25:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.349 13:25:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.349 13:25:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.349 13:25:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:15.349 13:25:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.zrB 00:26:15.349 13:25:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.349 13:25:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.349 13:25:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.349 13:25:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.8xK ]] 00:26:15.349 13:25:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.8xK 00:26:15.349 13:25:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.349 13:25:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.349 13:25:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.349 13:25:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:26:15.349 13:25:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.9kQ 00:26:15.349 13:25:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.349 13:25:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.349 13:25:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.349 13:25:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.j7u ]] 00:26:15.349 13:25:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.j7u 00:26:15.349 13:25:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.349 13:25:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.349 13:25:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.349 13:25:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:15.349 13:25:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.inT 00:26:15.349 13:25:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.349 13:25:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.349 13:25:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.349 13:25:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:26:15.349 13:25:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:26:15.350 13:25:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:26:15.350 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:15.350 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:15.350 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:15.350 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:15.350 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:15.350 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:15.350 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:15.350 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:15.350 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:15.350 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:15.350 13:25:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:26:15.350 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:26:15.350 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:26:15.350 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:15.350 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:15.350 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:15.350 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
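With key0-key4 and ckey0-ckey3 registered in the SPDK keyring, nvmet_auth_init and configure_kernel_target build a kernel NVMe-oF target: subsystem nqn.2024-02.io.spdk:cnode0, listening on NVMe/TCP 10.0.0.1:4420, backed by the first unused /dev/nvme* block device found by the scan below. xtrace does not print redirection targets, so the attribute files behind the echo commands are not visible in the log; mapped onto the standard kernel nvmet configfs layout, the setup that follows is roughly this sketch (attribute names are the upstream nvmet ones, the value-to-attribute mapping is inferred):

modprobe nvmet
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
mkdir "$subsys"
mkdir "$subsys/namespaces/1"
mkdir "$nvmet/ports/1"
# the 'echo SPDK-nqn.2024-02.io.spdk:cnode0' in the trace sets the subsystem's serial/model
# string; its exact attribute file is not shown by xtrace
echo 1            > "$subsys/attr_allow_any_host"        # inferred target of the first 'echo 1'
echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"   # backing device picked by the block scan
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
echo tcp          > "$nvmet/ports/1/addr_trtype"
echo 4420         > "$nvmet/ports/1/addr_trsvcid"
echo ipv4         > "$nvmet/ports/1/addr_adrfam"
ln -s "$subsys" "$nvmet/ports/1/subsystems/"

Further down, host/auth.sh adds hosts/nqn.2024-02.io.spdk:host0, links it into the subsystem's allowed_hosts, and nvmet_auth_set_key writes hmac(sha256), the DH group and the DHHC-1 secrets into that host entry (the dhchap_hash/dhchap_dhgroup/dhchap_key/dhchap_ctrl_key attributes), which is what the bdev_nvme_attach_controller ... --dhchap-key/--dhchap-ctrlr-key calls authenticate against.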
00:26:15.350 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:26:15.350 13:25:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:26:15.350 13:25:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:15.350 13:25:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:15.607 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:15.607 Waiting for block devices as requested 00:26:15.865 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:26:15.865 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:26:16.431 13:25:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:26:16.431 13:25:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:16.431 13:25:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:26:16.431 13:25:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:26:16.431 13:25:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:16.431 13:25:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:26:16.431 13:25:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:26:16.431 13:25:13 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:26:16.431 13:25:13 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:26:16.431 No valid GPT data, bailing 00:26:16.431 13:25:13 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:16.431 13:25:13 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:26:16.431 13:25:13 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:26:16.431 13:25:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:26:16.431 13:25:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:26:16.431 13:25:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:26:16.431 13:25:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:26:16.431 13:25:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1658 -- # local device=nvme0n2 00:26:16.431 13:25:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:26:16.431 13:25:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:26:16.431 13:25:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:26:16.431 13:25:13 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:26:16.431 13:25:13 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:26:16.689 No valid GPT data, bailing 00:26:16.689 13:25:13 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:26:16.689 13:25:13 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:26:16.689 13:25:13 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:26:16.689 13:25:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:26:16.689 13:25:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in 
/sys/block/nvme* 00:26:16.689 13:25:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:26:16.689 13:25:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:26:16.689 13:25:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1658 -- # local device=nvme0n3 00:26:16.689 13:25:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:26:16.689 13:25:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:26:16.689 13:25:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:26:16.689 13:25:13 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:26:16.689 13:25:13 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:26:16.689 No valid GPT data, bailing 00:26:16.689 13:25:13 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:26:16.689 13:25:13 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:26:16.689 13:25:13 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:26:16.689 13:25:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:26:16.689 13:25:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:26:16.689 13:25:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:26:16.689 13:25:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:26:16.689 13:25:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:26:16.689 13:25:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:26:16.689 13:25:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:26:16.690 13:25:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:26:16.690 13:25:13 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:26:16.690 13:25:13 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:26:16.690 No valid GPT data, bailing 00:26:16.690 13:25:13 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:26:16.690 13:25:13 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:26:16.690 13:25:13 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:26:16.690 13:25:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:26:16.690 13:25:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:26:16.690 13:25:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:16.690 13:25:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:16.690 13:25:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:16.690 13:25:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:26:16.690 13:25:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:26:16.690 13:25:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:26:16.690 13:25:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:26:16.690 13:25:13 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:26:16.690 13:25:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:26:16.690 13:25:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:26:16.690 13:25:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:26:16.690 13:25:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:16.690 13:25:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid=c8b8b44b-387e-43b9-a950-dc0d98528a02 -a 10.0.0.1 -t tcp -s 4420 00:26:16.690 00:26:16.690 Discovery Log Number of Records 2, Generation counter 2 00:26:16.690 =====Discovery Log Entry 0====== 00:26:16.690 trtype: tcp 00:26:16.690 adrfam: ipv4 00:26:16.690 subtype: current discovery subsystem 00:26:16.690 treq: not specified, sq flow control disable supported 00:26:16.690 portid: 1 00:26:16.690 trsvcid: 4420 00:26:16.690 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:16.690 traddr: 10.0.0.1 00:26:16.690 eflags: none 00:26:16.690 sectype: none 00:26:16.690 =====Discovery Log Entry 1====== 00:26:16.690 trtype: tcp 00:26:16.690 adrfam: ipv4 00:26:16.690 subtype: nvme subsystem 00:26:16.690 treq: not specified, sq flow control disable supported 00:26:16.690 portid: 1 00:26:16.690 trsvcid: 4420 00:26:16.690 subnqn: nqn.2024-02.io.spdk:cnode0 00:26:16.690 traddr: 10.0.0.1 00:26:16.690 eflags: none 00:26:16.690 sectype: none 00:26:16.690 13:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:16.690 13:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:26:16.690 13:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:16.690 13:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:16.690 13:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:16.690 13:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:16.690 13:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:16.690 13:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:16.690 13:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTVmZTMwNzQ0ZTJjODcxZjc1M2FjNjU4MTYzYmU2NGRjNjU0YjRjZDBmYzJiMDQyWrJITA==: 00:26:16.690 13:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmU3ODYxZmE1ZjNhMTlmYzU4NDE0MmNjMDQwYjU3ODRlZTc3OWFkYjg2NmRiZWY2HgH+kQ==: 00:26:16.690 13:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:16.690 13:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:16.947 13:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTVmZTMwNzQ0ZTJjODcxZjc1M2FjNjU4MTYzYmU2NGRjNjU0YjRjZDBmYzJiMDQyWrJITA==: 00:26:16.947 13:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmU3ODYxZmE1ZjNhMTlmYzU4NDE0MmNjMDQwYjU3ODRlZTc3OWFkYjg2NmRiZWY2HgH+kQ==: ]] 00:26:16.947 13:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmU3ODYxZmE1ZjNhMTlmYzU4NDE0MmNjMDQwYjU3ODRlZTc3OWFkYjg2NmRiZWY2HgH+kQ==: 00:26:16.947 13:25:13 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@93 -- # IFS=, 00:26:16.947 13:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:26:16.947 13:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:26:16.947 13:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:16.947 13:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:26:16.947 13:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:16.947 13:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:26:16.947 13:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:16.947 13:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:16.947 13:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:16.947 13:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:16.947 13:25:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.947 13:25:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.947 13:25:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.947 13:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:16.947 13:25:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:16.947 13:25:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:16.947 13:25:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:16.947 13:25:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:16.947 13:25:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:16.947 13:25:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:16.947 13:25:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:16.947 13:25:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:16.947 13:25:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:16.947 13:25:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:16.947 13:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:16.947 13:25:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.947 13:25:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.947 nvme0n1 00:26:16.947 13:25:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.947 13:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:16.947 13:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:16.947 13:25:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.947 13:25:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.947 13:25:13 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.948 13:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:16.948 13:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:16.948 13:25:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.948 13:25:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.205 13:25:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.205 13:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:17.205 13:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:17.205 13:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:17.205 13:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:26:17.205 13:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:17.205 13:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:17.205 13:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:17.205 13:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:17.205 13:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjgwYmU5MzhhZWZhODJmZDcwYzQxMzdmMGQwZDliMjB0sihb: 00:26:17.205 13:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yjc2YzI2NTU4Yzc3NzBhOGFjOTMzZjEyZmFhMjI2NGE2YTAxMmVmZGNjMDEyZGUyY2UxMGU0ZTk3OWUzMjM3OGp1SzA=: 00:26:17.205 13:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:17.205 13:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:17.205 13:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjgwYmU5MzhhZWZhODJmZDcwYzQxMzdmMGQwZDliMjB0sihb: 00:26:17.205 13:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yjc2YzI2NTU4Yzc3NzBhOGFjOTMzZjEyZmFhMjI2NGE2YTAxMmVmZGNjMDEyZGUyY2UxMGU0ZTk3OWUzMjM3OGp1SzA=: ]] 00:26:17.205 13:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yjc2YzI2NTU4Yzc3NzBhOGFjOTMzZjEyZmFhMjI2NGE2YTAxMmVmZGNjMDEyZGUyY2UxMGU0ZTk3OWUzMjM3OGp1SzA=: 00:26:17.205 13:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:26:17.205 13:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:17.205 13:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:17.205 13:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:17.205 13:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:17.205 13:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:17.205 13:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:17.205 13:25:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.205 13:25:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.205 13:25:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.205 13:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:17.205 13:25:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:17.205 13:25:13 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:26:17.205 13:25:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:17.205 13:25:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:17.205 13:25:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:17.205 13:25:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:17.205 13:25:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:17.205 13:25:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:17.205 13:25:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:17.205 13:25:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:17.205 13:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:17.205 13:25:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.205 13:25:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.205 nvme0n1 00:26:17.205 13:25:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.205 13:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:17.205 13:25:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.205 13:25:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.205 13:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:17.205 13:25:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.205 13:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:17.205 13:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:17.205 13:25:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.205 13:25:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.205 13:25:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.205 13:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:17.205 13:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:17.205 13:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:17.205 13:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:17.205 13:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:17.205 13:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:17.205 13:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTVmZTMwNzQ0ZTJjODcxZjc1M2FjNjU4MTYzYmU2NGRjNjU0YjRjZDBmYzJiMDQyWrJITA==: 00:26:17.205 13:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmU3ODYxZmE1ZjNhMTlmYzU4NDE0MmNjMDQwYjU3ODRlZTc3OWFkYjg2NmRiZWY2HgH+kQ==: 00:26:17.205 13:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:17.205 13:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:17.205 13:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YTVmZTMwNzQ0ZTJjODcxZjc1M2FjNjU4MTYzYmU2NGRjNjU0YjRjZDBmYzJiMDQyWrJITA==: 00:26:17.205 13:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmU3ODYxZmE1ZjNhMTlmYzU4NDE0MmNjMDQwYjU3ODRlZTc3OWFkYjg2NmRiZWY2HgH+kQ==: ]] 00:26:17.205 13:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmU3ODYxZmE1ZjNhMTlmYzU4NDE0MmNjMDQwYjU3ODRlZTc3OWFkYjg2NmRiZWY2HgH+kQ==: 00:26:17.205 13:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:26:17.205 13:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:17.205 13:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:17.205 13:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:17.205 13:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:17.205 13:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:17.205 13:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:17.206 13:25:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.206 13:25:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.206 13:25:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.206 13:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:17.206 13:25:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:17.206 13:25:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:17.206 13:25:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:17.206 13:25:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:17.206 13:25:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:17.206 13:25:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:17.206 13:25:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:17.206 13:25:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:17.206 13:25:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:17.206 13:25:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:17.206 13:25:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:17.206 13:25:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.206 13:25:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.464 nvme0n1 00:26:17.464 13:25:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.464 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:17.464 13:25:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.464 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:17.464 13:25:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.464 13:25:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.464 13:25:14 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:17.464 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:17.464 13:25:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.464 13:25:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.464 13:25:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.464 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:17.464 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:17.464 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:17.464 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:17.464 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:17.464 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:17.464 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDBhNTZmNWZiYzk0YmFlYTIxMWIyN2VhMzM3Yjg3NThot4LD: 00:26:17.464 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjVlYTllZmEyNjQ5MDQ0NDBmNjdiMjY3NTk3YzdjMGEoJprA: 00:26:17.464 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:17.464 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:17.464 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDBhNTZmNWZiYzk0YmFlYTIxMWIyN2VhMzM3Yjg3NThot4LD: 00:26:17.464 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjVlYTllZmEyNjQ5MDQ0NDBmNjdiMjY3NTk3YzdjMGEoJprA: ]] 00:26:17.464 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjVlYTllZmEyNjQ5MDQ0NDBmNjdiMjY3NTk3YzdjMGEoJprA: 00:26:17.464 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:26:17.464 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:17.464 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:17.464 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:17.464 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:17.464 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:17.464 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:17.464 13:25:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.464 13:25:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.464 13:25:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.464 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:17.464 13:25:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:17.464 13:25:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:17.464 13:25:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:17.464 13:25:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:17.464 13:25:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:17.464 13:25:14 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:17.464 13:25:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:17.464 13:25:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:17.464 13:25:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:17.464 13:25:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:17.464 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:17.464 13:25:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.464 13:25:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.464 nvme0n1 00:26:17.464 13:25:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.464 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:17.464 13:25:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.464 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:17.464 13:25:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.464 13:25:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.722 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:17.722 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:17.722 13:25:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.722 13:25:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.722 13:25:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.722 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:17.722 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:26:17.722 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:17.722 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:17.722 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:17.722 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:17.722 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjE2MzFkYzhkYzVmYWRiYzgzNTdhNjIzODBiZmI2MGQ3MGQwMTIwYzZiMjlmYTRlic7jDA==: 00:26:17.722 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWY1NDgxNDhjOTY5NTc3ZmQ5NDA5MjQ1MmJmYzhiY2Y2wgYh: 00:26:17.722 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:17.722 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:17.722 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjE2MzFkYzhkYzVmYWRiYzgzNTdhNjIzODBiZmI2MGQ3MGQwMTIwYzZiMjlmYTRlic7jDA==: 00:26:17.722 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWY1NDgxNDhjOTY5NTc3ZmQ5NDA5MjQ1MmJmYzhiY2Y2wgYh: ]] 00:26:17.722 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWY1NDgxNDhjOTY5NTc3ZmQ5NDA5MjQ1MmJmYzhiY2Y2wgYh: 00:26:17.722 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:26:17.722 13:25:14 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:17.722 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:17.722 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:17.722 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:17.722 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:17.722 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:17.722 13:25:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.722 13:25:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.722 13:25:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.722 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:17.722 13:25:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:17.722 13:25:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:17.722 13:25:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:17.722 13:25:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:17.722 13:25:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:17.722 13:25:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:17.722 13:25:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:17.722 13:25:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:17.722 13:25:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:17.723 13:25:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:17.723 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:17.723 13:25:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.723 13:25:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.723 nvme0n1 00:26:17.723 13:25:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.723 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:17.723 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:17.723 13:25:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.723 13:25:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.723 13:25:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.723 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:17.723 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:17.723 13:25:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.723 13:25:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.723 13:25:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.723 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:26:17.723 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:26:17.723 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:17.723 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:17.723 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:17.723 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:17.723 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2Q2MTBhNDNhZDcwZGFkN2VkYjc3NWI0MTA4ODQxMmQ5YjczMTRhZTVhNDRlMzkwNTk0MGYyMGVhMzA4YThkMMkbgZ0=: 00:26:17.723 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:17.723 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:17.723 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:17.723 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2Q2MTBhNDNhZDcwZGFkN2VkYjc3NWI0MTA4ODQxMmQ5YjczMTRhZTVhNDRlMzkwNTk0MGYyMGVhMzA4YThkMMkbgZ0=: 00:26:17.723 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:17.723 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:26:17.723 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:17.723 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:17.723 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:17.723 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:17.723 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:17.723 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:17.723 13:25:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.723 13:25:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.723 13:25:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.723 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:17.723 13:25:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:17.723 13:25:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:17.723 13:25:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:17.723 13:25:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:17.723 13:25:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:17.723 13:25:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:17.723 13:25:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:17.723 13:25:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:17.723 13:25:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:17.723 13:25:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:17.723 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:17.723 13:25:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:26:17.723 13:25:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.991 nvme0n1 00:26:17.991 13:25:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.991 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:17.991 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:17.991 13:25:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.991 13:25:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.991 13:25:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.991 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:17.991 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:17.991 13:25:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.991 13:25:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.991 13:25:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.991 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:17.991 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:17.991 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:26:17.991 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:17.991 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:17.991 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:17.991 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:17.991 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjgwYmU5MzhhZWZhODJmZDcwYzQxMzdmMGQwZDliMjB0sihb: 00:26:17.991 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yjc2YzI2NTU4Yzc3NzBhOGFjOTMzZjEyZmFhMjI2NGE2YTAxMmVmZGNjMDEyZGUyY2UxMGU0ZTk3OWUzMjM3OGp1SzA=: 00:26:17.991 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:17.991 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:18.259 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjgwYmU5MzhhZWZhODJmZDcwYzQxMzdmMGQwZDliMjB0sihb: 00:26:18.259 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yjc2YzI2NTU4Yzc3NzBhOGFjOTMzZjEyZmFhMjI2NGE2YTAxMmVmZGNjMDEyZGUyY2UxMGU0ZTk3OWUzMjM3OGp1SzA=: ]] 00:26:18.259 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yjc2YzI2NTU4Yzc3NzBhOGFjOTMzZjEyZmFhMjI2NGE2YTAxMmVmZGNjMDEyZGUyY2UxMGU0ZTk3OWUzMjM3OGp1SzA=: 00:26:18.259 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:26:18.259 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:18.259 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:18.259 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:18.259 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:18.259 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:18.259 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups ffdhe3072 00:26:18.259 13:25:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.259 13:25:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.259 13:25:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.259 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:18.260 13:25:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:18.260 13:25:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:18.260 13:25:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:18.260 13:25:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:18.260 13:25:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:18.260 13:25:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:18.260 13:25:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:18.260 13:25:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:18.260 13:25:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:18.260 13:25:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:18.260 13:25:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:18.260 13:25:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.260 13:25:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.519 nvme0n1 00:26:18.519 13:25:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.519 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:18.519 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:18.519 13:25:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.519 13:25:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.519 13:25:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.519 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:18.519 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:18.519 13:25:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.519 13:25:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.519 13:25:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.519 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:18.519 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:26:18.519 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:18.519 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:18.519 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:18.519 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:18.519 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YTVmZTMwNzQ0ZTJjODcxZjc1M2FjNjU4MTYzYmU2NGRjNjU0YjRjZDBmYzJiMDQyWrJITA==: 00:26:18.519 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmU3ODYxZmE1ZjNhMTlmYzU4NDE0MmNjMDQwYjU3ODRlZTc3OWFkYjg2NmRiZWY2HgH+kQ==: 00:26:18.519 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:18.519 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:18.519 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTVmZTMwNzQ0ZTJjODcxZjc1M2FjNjU4MTYzYmU2NGRjNjU0YjRjZDBmYzJiMDQyWrJITA==: 00:26:18.519 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmU3ODYxZmE1ZjNhMTlmYzU4NDE0MmNjMDQwYjU3ODRlZTc3OWFkYjg2NmRiZWY2HgH+kQ==: ]] 00:26:18.519 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmU3ODYxZmE1ZjNhMTlmYzU4NDE0MmNjMDQwYjU3ODRlZTc3OWFkYjg2NmRiZWY2HgH+kQ==: 00:26:18.519 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:26:18.519 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:18.519 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:18.519 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:18.519 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:18.519 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:18.519 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:18.519 13:25:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.519 13:25:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.519 13:25:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.519 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:18.519 13:25:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:18.519 13:25:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:18.519 13:25:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:18.519 13:25:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:18.519 13:25:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:18.519 13:25:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:18.519 13:25:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:18.519 13:25:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:18.519 13:25:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:18.519 13:25:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:18.519 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:18.519 13:25:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.519 13:25:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.519 nvme0n1 00:26:18.519 13:25:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.519 13:25:15 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:18.519 13:25:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.519 13:25:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.519 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:18.519 13:25:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.778 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:18.778 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:18.778 13:25:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.778 13:25:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.778 13:25:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.778 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:18.778 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:26:18.778 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:18.778 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:18.778 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:18.778 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:18.778 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDBhNTZmNWZiYzk0YmFlYTIxMWIyN2VhMzM3Yjg3NThot4LD: 00:26:18.778 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjVlYTllZmEyNjQ5MDQ0NDBmNjdiMjY3NTk3YzdjMGEoJprA: 00:26:18.778 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:18.778 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:18.778 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDBhNTZmNWZiYzk0YmFlYTIxMWIyN2VhMzM3Yjg3NThot4LD: 00:26:18.778 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjVlYTllZmEyNjQ5MDQ0NDBmNjdiMjY3NTk3YzdjMGEoJprA: ]] 00:26:18.778 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjVlYTllZmEyNjQ5MDQ0NDBmNjdiMjY3NTk3YzdjMGEoJprA: 00:26:18.778 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:26:18.778 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:18.778 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:18.778 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:18.778 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:18.778 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:18.778 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:18.778 13:25:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.778 13:25:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.778 13:25:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.778 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:18.778 13:25:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:26:18.778 13:25:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:18.778 13:25:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:18.778 13:25:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:18.778 13:25:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:18.778 13:25:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:18.778 13:25:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:18.778 13:25:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:18.778 13:25:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:18.778 13:25:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:18.778 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:18.778 13:25:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.778 13:25:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.778 nvme0n1 00:26:18.778 13:25:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.778 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:18.778 13:25:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.778 13:25:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.778 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:18.778 13:25:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.778 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:18.778 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:18.778 13:25:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.778 13:25:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.778 13:25:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.778 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:18.778 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:26:18.778 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:18.778 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:18.778 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:18.778 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:18.778 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjE2MzFkYzhkYzVmYWRiYzgzNTdhNjIzODBiZmI2MGQ3MGQwMTIwYzZiMjlmYTRlic7jDA==: 00:26:18.778 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWY1NDgxNDhjOTY5NTc3ZmQ5NDA5MjQ1MmJmYzhiY2Y2wgYh: 00:26:18.778 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:18.778 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:18.778 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:MjE2MzFkYzhkYzVmYWRiYzgzNTdhNjIzODBiZmI2MGQ3MGQwMTIwYzZiMjlmYTRlic7jDA==: 00:26:18.778 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWY1NDgxNDhjOTY5NTc3ZmQ5NDA5MjQ1MmJmYzhiY2Y2wgYh: ]] 00:26:18.778 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWY1NDgxNDhjOTY5NTc3ZmQ5NDA5MjQ1MmJmYzhiY2Y2wgYh: 00:26:18.778 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:26:18.778 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:18.778 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:18.778 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:18.778 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:18.778 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:18.778 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:18.778 13:25:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.778 13:25:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.778 13:25:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.778 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:18.778 13:25:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:18.778 13:25:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:18.778 13:25:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:18.778 13:25:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:18.778 13:25:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:18.778 13:25:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:18.778 13:25:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:18.778 13:25:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:18.778 13:25:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:18.778 13:25:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:18.778 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:18.778 13:25:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.778 13:25:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.037 nvme0n1 00:26:19.037 13:25:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.037 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:19.037 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:19.037 13:25:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.037 13:25:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.037 13:25:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.037 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:26:19.037 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:19.037 13:25:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.037 13:25:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.037 13:25:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.037 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:19.038 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:26:19.038 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:19.038 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:19.038 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:19.038 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:19.038 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2Q2MTBhNDNhZDcwZGFkN2VkYjc3NWI0MTA4ODQxMmQ5YjczMTRhZTVhNDRlMzkwNTk0MGYyMGVhMzA4YThkMMkbgZ0=: 00:26:19.038 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:19.038 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:19.038 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:19.038 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2Q2MTBhNDNhZDcwZGFkN2VkYjc3NWI0MTA4ODQxMmQ5YjczMTRhZTVhNDRlMzkwNTk0MGYyMGVhMzA4YThkMMkbgZ0=: 00:26:19.038 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:19.038 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:26:19.038 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:19.038 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:19.038 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:19.038 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:19.038 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:19.038 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:19.038 13:25:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.038 13:25:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.038 13:25:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.038 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:19.038 13:25:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:19.038 13:25:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:19.038 13:25:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:19.038 13:25:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:19.038 13:25:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:19.038 13:25:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:19.038 13:25:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:19.038 13:25:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
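The nvmf/common.sh@741-755 lines that repeat before every attach above resolve which IP the initiator should dial for the active transport. A minimal bash sketch of that logic, reconstructed from the trace alone, might look like the following; the TEST_TRANSPORT name and the indirect-expansion step are assumptions, since the trace only shows the literal values tcp, NVMF_INITIATOR_IP and 10.0.0.1:

get_main_ns_ip() {
    local ip
    local -A ip_candidates=()
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # RDMA runs would use the first target IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP       # TCP runs (this job) use the initiator IP

    # Bail out if no transport is set or it has no candidate variable.
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1

    # Select the candidate variable name, then dereference it to its value
    # (10.0.0.1 in this run) and print it for the caller.
    ip=${ip_candidates[$TEST_TRANSPORT]}
    ip=${!ip}
    [[ -z $ip ]] && return 1
    echo "$ip"
}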
00:26:19.038 13:25:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:19.038 13:25:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:19.038 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:19.038 13:25:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.038 13:25:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.296 nvme0n1 00:26:19.296 13:25:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.296 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:19.296 13:25:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.296 13:25:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.296 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:19.296 13:25:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.296 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:19.296 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:19.296 13:25:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.296 13:25:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.296 13:25:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.296 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:19.296 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:19.296 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:26:19.296 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:19.296 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:19.296 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:19.296 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:19.296 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjgwYmU5MzhhZWZhODJmZDcwYzQxMzdmMGQwZDliMjB0sihb: 00:26:19.296 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yjc2YzI2NTU4Yzc3NzBhOGFjOTMzZjEyZmFhMjI2NGE2YTAxMmVmZGNjMDEyZGUyY2UxMGU0ZTk3OWUzMjM3OGp1SzA=: 00:26:19.296 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:19.296 13:25:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:19.863 13:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjgwYmU5MzhhZWZhODJmZDcwYzQxMzdmMGQwZDliMjB0sihb: 00:26:19.863 13:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yjc2YzI2NTU4Yzc3NzBhOGFjOTMzZjEyZmFhMjI2NGE2YTAxMmVmZGNjMDEyZGUyY2UxMGU0ZTk3OWUzMjM3OGp1SzA=: ]] 00:26:19.863 13:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yjc2YzI2NTU4Yzc3NzBhOGFjOTMzZjEyZmFhMjI2NGE2YTAxMmVmZGNjMDEyZGUyY2UxMGU0ZTk3OWUzMjM3OGp1SzA=: 00:26:19.863 13:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:26:19.863 13:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
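The host/auth.sh@101-104 trace above is a nested sweep over DH-HMAC-CHAP dhgroups and key IDs for the sha256 digest. Reconstructed from the commands visible so far in this run (RPC names and arguments are taken verbatim from the trace; the array contents and the exact helper structure are assumptions, and nvmet_auth_set_key, rpc_cmd and the keys/ckeys arrays are the test's own helpers set up earlier), each iteration looks roughly like this:

digests=(sha256)                                    # only sha256 has appeared in this part of the trace
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144)  # groups observed in this section
for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do                  # keys/ckeys are generated earlier in the test
        # Program the key (and optional controller key) into the kernel nvmet target.
        nvmet_auth_set_key sha256 "$dhgroup" "$keyid"

        # Limit the initiator to this digest/dhgroup, then connect with DH-HMAC-CHAP.
        rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key$keyid" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"}

        # The attach only sticks if authentication succeeded; verify, then detach.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    done
done

The stray nvme0n1 lines interleaved with the trace are the host-side namespace appearing and disappearing as each authenticated connection is made and torn down.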
00:26:19.863 13:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:19.863 13:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:19.863 13:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:19.863 13:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:19.863 13:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:19.863 13:25:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.863 13:25:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.863 13:25:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.863 13:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:19.863 13:25:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:19.863 13:25:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:19.863 13:25:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:19.863 13:25:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:19.863 13:25:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:19.863 13:25:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:19.863 13:25:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:19.863 13:25:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:19.863 13:25:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:19.863 13:25:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:19.863 13:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:19.863 13:25:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.863 13:25:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.121 nvme0n1 00:26:20.121 13:25:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.121 13:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:20.121 13:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:20.121 13:25:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.121 13:25:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.121 13:25:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.121 13:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:20.121 13:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:20.121 13:25:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.121 13:25:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.121 13:25:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.121 13:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:20.121 13:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 1 00:26:20.121 13:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:20.121 13:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:20.121 13:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:20.121 13:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:20.121 13:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTVmZTMwNzQ0ZTJjODcxZjc1M2FjNjU4MTYzYmU2NGRjNjU0YjRjZDBmYzJiMDQyWrJITA==: 00:26:20.121 13:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmU3ODYxZmE1ZjNhMTlmYzU4NDE0MmNjMDQwYjU3ODRlZTc3OWFkYjg2NmRiZWY2HgH+kQ==: 00:26:20.121 13:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:20.121 13:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:20.121 13:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTVmZTMwNzQ0ZTJjODcxZjc1M2FjNjU4MTYzYmU2NGRjNjU0YjRjZDBmYzJiMDQyWrJITA==: 00:26:20.121 13:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmU3ODYxZmE1ZjNhMTlmYzU4NDE0MmNjMDQwYjU3ODRlZTc3OWFkYjg2NmRiZWY2HgH+kQ==: ]] 00:26:20.121 13:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmU3ODYxZmE1ZjNhMTlmYzU4NDE0MmNjMDQwYjU3ODRlZTc3OWFkYjg2NmRiZWY2HgH+kQ==: 00:26:20.121 13:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:26:20.121 13:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:20.121 13:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:20.121 13:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:20.121 13:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:20.121 13:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:20.121 13:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:20.121 13:25:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.121 13:25:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.121 13:25:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.121 13:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:20.121 13:25:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:20.121 13:25:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:20.121 13:25:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:20.121 13:25:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:20.121 13:25:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:20.121 13:25:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:20.121 13:25:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:20.121 13:25:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:20.121 13:25:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:20.121 13:25:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:20.121 13:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:20.121 13:25:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.121 13:25:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.379 nvme0n1 00:26:20.379 13:25:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.379 13:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:20.379 13:25:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.379 13:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:20.379 13:25:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.379 13:25:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.379 13:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:20.379 13:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:20.379 13:25:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.379 13:25:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.379 13:25:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.379 13:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:20.379 13:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:26:20.379 13:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:20.379 13:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:20.379 13:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:20.379 13:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:20.379 13:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDBhNTZmNWZiYzk0YmFlYTIxMWIyN2VhMzM3Yjg3NThot4LD: 00:26:20.379 13:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjVlYTllZmEyNjQ5MDQ0NDBmNjdiMjY3NTk3YzdjMGEoJprA: 00:26:20.379 13:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:20.379 13:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:20.379 13:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDBhNTZmNWZiYzk0YmFlYTIxMWIyN2VhMzM3Yjg3NThot4LD: 00:26:20.379 13:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjVlYTllZmEyNjQ5MDQ0NDBmNjdiMjY3NTk3YzdjMGEoJprA: ]] 00:26:20.379 13:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjVlYTllZmEyNjQ5MDQ0NDBmNjdiMjY3NTk3YzdjMGEoJprA: 00:26:20.379 13:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:26:20.379 13:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:20.379 13:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:20.379 13:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:20.379 13:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:20.379 13:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:20.379 13:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:26:20.379 13:25:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.379 13:25:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.379 13:25:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.379 13:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:20.379 13:25:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:20.379 13:25:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:20.379 13:25:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:20.379 13:25:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:20.379 13:25:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:20.379 13:25:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:20.379 13:25:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:20.379 13:25:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:20.379 13:25:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:20.379 13:25:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:20.379 13:25:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:20.379 13:25:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.379 13:25:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.637 nvme0n1 00:26:20.637 13:25:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.637 13:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:20.637 13:25:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.637 13:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:20.637 13:25:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.637 13:25:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.637 13:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:20.637 13:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:20.637 13:25:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.637 13:25:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.637 13:25:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.637 13:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:20.637 13:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:26:20.637 13:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:20.637 13:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:20.637 13:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:20.637 13:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:20.637 13:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MjE2MzFkYzhkYzVmYWRiYzgzNTdhNjIzODBiZmI2MGQ3MGQwMTIwYzZiMjlmYTRlic7jDA==: 00:26:20.637 13:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWY1NDgxNDhjOTY5NTc3ZmQ5NDA5MjQ1MmJmYzhiY2Y2wgYh: 00:26:20.637 13:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:20.637 13:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:20.637 13:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjE2MzFkYzhkYzVmYWRiYzgzNTdhNjIzODBiZmI2MGQ3MGQwMTIwYzZiMjlmYTRlic7jDA==: 00:26:20.637 13:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWY1NDgxNDhjOTY5NTc3ZmQ5NDA5MjQ1MmJmYzhiY2Y2wgYh: ]] 00:26:20.637 13:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWY1NDgxNDhjOTY5NTc3ZmQ5NDA5MjQ1MmJmYzhiY2Y2wgYh: 00:26:20.637 13:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:26:20.637 13:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:20.637 13:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:20.637 13:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:20.637 13:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:20.637 13:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:20.638 13:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:20.638 13:25:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.638 13:25:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.638 13:25:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.638 13:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:20.638 13:25:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:20.638 13:25:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:20.638 13:25:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:20.638 13:25:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:20.638 13:25:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:20.638 13:25:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:20.638 13:25:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:20.638 13:25:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:20.638 13:25:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:20.638 13:25:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:20.638 13:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:20.638 13:25:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.638 13:25:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.896 nvme0n1 00:26:20.896 13:25:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.896 13:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:26:20.896 13:25:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.896 13:25:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.896 13:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:20.896 13:25:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.896 13:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:20.896 13:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:20.896 13:25:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.896 13:25:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.896 13:25:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.896 13:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:20.896 13:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:26:20.896 13:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:20.896 13:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:20.896 13:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:20.896 13:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:20.896 13:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2Q2MTBhNDNhZDcwZGFkN2VkYjc3NWI0MTA4ODQxMmQ5YjczMTRhZTVhNDRlMzkwNTk0MGYyMGVhMzA4YThkMMkbgZ0=: 00:26:20.896 13:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:20.896 13:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:20.896 13:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:20.896 13:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2Q2MTBhNDNhZDcwZGFkN2VkYjc3NWI0MTA4ODQxMmQ5YjczMTRhZTVhNDRlMzkwNTk0MGYyMGVhMzA4YThkMMkbgZ0=: 00:26:20.896 13:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:20.896 13:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:26:20.896 13:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:20.896 13:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:20.896 13:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:20.896 13:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:20.896 13:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:20.896 13:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:20.896 13:25:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.896 13:25:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.896 13:25:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.896 13:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:20.896 13:25:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:20.896 13:25:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:20.896 13:25:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:20.896 13:25:17 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:20.896 13:25:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:20.896 13:25:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:20.896 13:25:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:20.896 13:25:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:20.896 13:25:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:20.896 13:25:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:20.896 13:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:20.896 13:25:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.896 13:25:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.154 nvme0n1 00:26:21.154 13:25:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.154 13:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:21.154 13:25:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.154 13:25:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.154 13:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:21.154 13:25:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.154 13:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:21.154 13:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:21.154 13:25:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.154 13:25:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.154 13:25:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.154 13:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:21.154 13:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:21.154 13:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:26:21.154 13:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:21.154 13:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:21.154 13:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:21.154 13:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:21.154 13:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjgwYmU5MzhhZWZhODJmZDcwYzQxMzdmMGQwZDliMjB0sihb: 00:26:21.154 13:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yjc2YzI2NTU4Yzc3NzBhOGFjOTMzZjEyZmFhMjI2NGE2YTAxMmVmZGNjMDEyZGUyY2UxMGU0ZTk3OWUzMjM3OGp1SzA=: 00:26:21.154 13:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:21.154 13:25:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:23.053 13:25:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjgwYmU5MzhhZWZhODJmZDcwYzQxMzdmMGQwZDliMjB0sihb: 00:26:23.053 13:25:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:Yjc2YzI2NTU4Yzc3NzBhOGFjOTMzZjEyZmFhMjI2NGE2YTAxMmVmZGNjMDEyZGUyY2UxMGU0ZTk3OWUzMjM3OGp1SzA=: ]] 00:26:23.053 13:25:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yjc2YzI2NTU4Yzc3NzBhOGFjOTMzZjEyZmFhMjI2NGE2YTAxMmVmZGNjMDEyZGUyY2UxMGU0ZTk3OWUzMjM3OGp1SzA=: 00:26:23.053 13:25:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:26:23.053 13:25:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:23.053 13:25:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:23.053 13:25:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:23.053 13:25:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:23.053 13:25:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:23.053 13:25:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:23.053 13:25:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.053 13:25:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.053 13:25:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.053 13:25:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:23.053 13:25:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:23.053 13:25:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:23.053 13:25:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:23.053 13:25:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:23.053 13:25:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:23.053 13:25:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:23.053 13:25:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:23.053 13:25:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:23.053 13:25:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:23.053 13:25:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:23.053 13:25:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:23.053 13:25:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.053 13:25:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.312 nvme0n1 00:26:23.312 13:25:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.312 13:25:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:23.312 13:25:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.312 13:25:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.312 13:25:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:23.312 13:25:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.312 13:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:23.312 13:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- 
# rpc_cmd bdev_nvme_detach_controller nvme0 00:26:23.312 13:25:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.312 13:25:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.312 13:25:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.312 13:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:23.312 13:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:26:23.312 13:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:23.312 13:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:23.312 13:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:23.312 13:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:23.312 13:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTVmZTMwNzQ0ZTJjODcxZjc1M2FjNjU4MTYzYmU2NGRjNjU0YjRjZDBmYzJiMDQyWrJITA==: 00:26:23.312 13:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmU3ODYxZmE1ZjNhMTlmYzU4NDE0MmNjMDQwYjU3ODRlZTc3OWFkYjg2NmRiZWY2HgH+kQ==: 00:26:23.312 13:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:23.312 13:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:23.312 13:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTVmZTMwNzQ0ZTJjODcxZjc1M2FjNjU4MTYzYmU2NGRjNjU0YjRjZDBmYzJiMDQyWrJITA==: 00:26:23.312 13:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmU3ODYxZmE1ZjNhMTlmYzU4NDE0MmNjMDQwYjU3ODRlZTc3OWFkYjg2NmRiZWY2HgH+kQ==: ]] 00:26:23.312 13:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmU3ODYxZmE1ZjNhMTlmYzU4NDE0MmNjMDQwYjU3ODRlZTc3OWFkYjg2NmRiZWY2HgH+kQ==: 00:26:23.312 13:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:26:23.312 13:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:23.312 13:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:23.312 13:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:23.312 13:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:23.312 13:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:23.312 13:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:23.312 13:25:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.312 13:25:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.312 13:25:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.312 13:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:23.312 13:25:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:23.312 13:25:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:23.312 13:25:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:23.312 13:25:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:23.312 13:25:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:23.312 13:25:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z 
tcp ]] 00:26:23.312 13:25:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:23.312 13:25:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:23.312 13:25:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:23.312 13:25:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:23.312 13:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:23.312 13:25:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.312 13:25:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.880 nvme0n1 00:26:23.880 13:25:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.880 13:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:23.880 13:25:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.880 13:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:23.880 13:25:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.880 13:25:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.880 13:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:23.880 13:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:23.880 13:25:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.880 13:25:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.880 13:25:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.880 13:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:23.880 13:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:26:23.880 13:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:23.880 13:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:23.880 13:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:23.880 13:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:23.880 13:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDBhNTZmNWZiYzk0YmFlYTIxMWIyN2VhMzM3Yjg3NThot4LD: 00:26:23.880 13:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjVlYTllZmEyNjQ5MDQ0NDBmNjdiMjY3NTk3YzdjMGEoJprA: 00:26:23.880 13:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:23.880 13:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:23.880 13:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDBhNTZmNWZiYzk0YmFlYTIxMWIyN2VhMzM3Yjg3NThot4LD: 00:26:23.880 13:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjVlYTllZmEyNjQ5MDQ0NDBmNjdiMjY3NTk3YzdjMGEoJprA: ]] 00:26:23.880 13:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjVlYTllZmEyNjQ5MDQ0NDBmNjdiMjY3NTk3YzdjMGEoJprA: 00:26:23.880 13:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:26:23.880 13:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:23.880 
13:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:23.880 13:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:23.880 13:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:23.880 13:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:23.880 13:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:23.880 13:25:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.880 13:25:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.880 13:25:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.880 13:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:23.880 13:25:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:23.880 13:25:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:23.880 13:25:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:23.880 13:25:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:23.880 13:25:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:23.880 13:25:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:23.880 13:25:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:23.880 13:25:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:23.880 13:25:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:23.880 13:25:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:23.880 13:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:23.880 13:25:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.880 13:25:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.139 nvme0n1 00:26:24.139 13:25:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.139 13:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:24.139 13:25:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.139 13:25:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.139 13:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:24.139 13:25:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.139 13:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:24.139 13:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:24.139 13:25:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.139 13:25:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.139 13:25:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.139 13:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:24.139 13:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 3 00:26:24.139 13:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:24.139 13:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:24.139 13:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:24.139 13:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:24.139 13:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjE2MzFkYzhkYzVmYWRiYzgzNTdhNjIzODBiZmI2MGQ3MGQwMTIwYzZiMjlmYTRlic7jDA==: 00:26:24.139 13:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWY1NDgxNDhjOTY5NTc3ZmQ5NDA5MjQ1MmJmYzhiY2Y2wgYh: 00:26:24.139 13:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:24.139 13:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:24.139 13:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjE2MzFkYzhkYzVmYWRiYzgzNTdhNjIzODBiZmI2MGQ3MGQwMTIwYzZiMjlmYTRlic7jDA==: 00:26:24.139 13:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWY1NDgxNDhjOTY5NTc3ZmQ5NDA5MjQ1MmJmYzhiY2Y2wgYh: ]] 00:26:24.139 13:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWY1NDgxNDhjOTY5NTc3ZmQ5NDA5MjQ1MmJmYzhiY2Y2wgYh: 00:26:24.139 13:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:26:24.139 13:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:24.139 13:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:24.139 13:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:24.139 13:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:24.139 13:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:24.139 13:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:24.139 13:25:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.139 13:25:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.396 13:25:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.396 13:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:24.396 13:25:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:24.396 13:25:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:24.396 13:25:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:24.396 13:25:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:24.396 13:25:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:24.396 13:25:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:24.396 13:25:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:24.396 13:25:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:24.396 13:25:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:24.396 13:25:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:24.396 13:25:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:24.396 13:25:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.396 13:25:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.655 nvme0n1 00:26:24.655 13:25:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.655 13:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:24.655 13:25:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.655 13:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:24.655 13:25:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.655 13:25:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.655 13:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:24.655 13:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:24.655 13:25:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.655 13:25:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.655 13:25:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.655 13:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:24.655 13:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:26:24.655 13:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:24.655 13:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:24.655 13:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:24.655 13:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:24.655 13:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2Q2MTBhNDNhZDcwZGFkN2VkYjc3NWI0MTA4ODQxMmQ5YjczMTRhZTVhNDRlMzkwNTk0MGYyMGVhMzA4YThkMMkbgZ0=: 00:26:24.655 13:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:24.655 13:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:24.655 13:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:24.655 13:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2Q2MTBhNDNhZDcwZGFkN2VkYjc3NWI0MTA4ODQxMmQ5YjczMTRhZTVhNDRlMzkwNTk0MGYyMGVhMzA4YThkMMkbgZ0=: 00:26:24.655 13:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:24.655 13:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:26:24.655 13:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:24.655 13:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:24.655 13:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:24.655 13:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:24.655 13:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:24.655 13:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:24.655 13:25:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.655 13:25:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.655 13:25:21 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.655 13:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:24.655 13:25:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:24.655 13:25:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:24.655 13:25:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:24.655 13:25:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:24.655 13:25:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:24.656 13:25:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:24.656 13:25:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:24.656 13:25:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:24.656 13:25:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:24.656 13:25:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:24.656 13:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:24.656 13:25:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.656 13:25:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.914 nvme0n1 00:26:24.914 13:25:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.914 13:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:24.914 13:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:24.914 13:25:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.914 13:25:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.173 13:25:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.173 13:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:25.173 13:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:25.173 13:25:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.173 13:25:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.173 13:25:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.173 13:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:25.173 13:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:25.173 13:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:26:25.173 13:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:25.173 13:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:25.173 13:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:25.173 13:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:25.173 13:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjgwYmU5MzhhZWZhODJmZDcwYzQxMzdmMGQwZDliMjB0sihb: 00:26:25.173 13:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:Yjc2YzI2NTU4Yzc3NzBhOGFjOTMzZjEyZmFhMjI2NGE2YTAxMmVmZGNjMDEyZGUyY2UxMGU0ZTk3OWUzMjM3OGp1SzA=: 00:26:25.173 13:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:25.173 13:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:25.173 13:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjgwYmU5MzhhZWZhODJmZDcwYzQxMzdmMGQwZDliMjB0sihb: 00:26:25.173 13:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yjc2YzI2NTU4Yzc3NzBhOGFjOTMzZjEyZmFhMjI2NGE2YTAxMmVmZGNjMDEyZGUyY2UxMGU0ZTk3OWUzMjM3OGp1SzA=: ]] 00:26:25.173 13:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yjc2YzI2NTU4Yzc3NzBhOGFjOTMzZjEyZmFhMjI2NGE2YTAxMmVmZGNjMDEyZGUyY2UxMGU0ZTk3OWUzMjM3OGp1SzA=: 00:26:25.173 13:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:26:25.173 13:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:25.173 13:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:25.173 13:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:25.173 13:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:25.173 13:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:25.173 13:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:25.173 13:25:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.173 13:25:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.173 13:25:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.173 13:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:25.173 13:25:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:25.173 13:25:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:25.173 13:25:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:25.173 13:25:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:25.173 13:25:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:25.173 13:25:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:25.173 13:25:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:25.173 13:25:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:25.173 13:25:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:25.173 13:25:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:25.173 13:25:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:25.173 13:25:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.173 13:25:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.740 nvme0n1 00:26:25.740 13:25:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.740 13:25:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:25.740 13:25:22 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.741 13:25:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:25.741 13:25:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.741 13:25:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.741 13:25:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:25.741 13:25:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:25.741 13:25:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.741 13:25:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.741 13:25:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.741 13:25:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:25.741 13:25:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:26:25.741 13:25:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:25.741 13:25:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:25.741 13:25:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:25.741 13:25:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:25.741 13:25:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTVmZTMwNzQ0ZTJjODcxZjc1M2FjNjU4MTYzYmU2NGRjNjU0YjRjZDBmYzJiMDQyWrJITA==: 00:26:25.741 13:25:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmU3ODYxZmE1ZjNhMTlmYzU4NDE0MmNjMDQwYjU3ODRlZTc3OWFkYjg2NmRiZWY2HgH+kQ==: 00:26:25.741 13:25:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:25.741 13:25:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:25.741 13:25:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTVmZTMwNzQ0ZTJjODcxZjc1M2FjNjU4MTYzYmU2NGRjNjU0YjRjZDBmYzJiMDQyWrJITA==: 00:26:25.741 13:25:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmU3ODYxZmE1ZjNhMTlmYzU4NDE0MmNjMDQwYjU3ODRlZTc3OWFkYjg2NmRiZWY2HgH+kQ==: ]] 00:26:25.741 13:25:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmU3ODYxZmE1ZjNhMTlmYzU4NDE0MmNjMDQwYjU3ODRlZTc3OWFkYjg2NmRiZWY2HgH+kQ==: 00:26:25.741 13:25:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:26:25.741 13:25:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:25.741 13:25:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:25.741 13:25:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:25.741 13:25:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:25.741 13:25:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:25.741 13:25:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:25.741 13:25:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.741 13:25:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.741 13:25:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.741 13:25:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:25.741 13:25:22 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:26:25.741 13:25:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:25.741 13:25:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:25.741 13:25:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:25.741 13:25:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:25.741 13:25:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:25.741 13:25:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:25.741 13:25:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:25.741 13:25:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:25.741 13:25:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:25.741 13:25:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:25.741 13:25:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.741 13:25:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.308 nvme0n1 00:26:26.308 13:25:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.308 13:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:26.308 13:25:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.308 13:25:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.308 13:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:26.308 13:25:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.566 13:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:26.567 13:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:26.567 13:25:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.567 13:25:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.567 13:25:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.567 13:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:26.567 13:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:26:26.567 13:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:26.567 13:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:26.567 13:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:26.567 13:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:26.567 13:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDBhNTZmNWZiYzk0YmFlYTIxMWIyN2VhMzM3Yjg3NThot4LD: 00:26:26.567 13:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjVlYTllZmEyNjQ5MDQ0NDBmNjdiMjY3NTk3YzdjMGEoJprA: 00:26:26.567 13:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:26.567 13:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:26.567 13:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:MDBhNTZmNWZiYzk0YmFlYTIxMWIyN2VhMzM3Yjg3NThot4LD: 00:26:26.567 13:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjVlYTllZmEyNjQ5MDQ0NDBmNjdiMjY3NTk3YzdjMGEoJprA: ]] 00:26:26.567 13:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjVlYTllZmEyNjQ5MDQ0NDBmNjdiMjY3NTk3YzdjMGEoJprA: 00:26:26.567 13:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:26:26.567 13:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:26.567 13:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:26.567 13:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:26.567 13:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:26.567 13:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:26.567 13:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:26.567 13:25:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.567 13:25:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.567 13:25:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.567 13:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:26.567 13:25:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:26.567 13:25:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:26.567 13:25:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:26.567 13:25:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:26.567 13:25:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:26.567 13:25:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:26.567 13:25:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:26.567 13:25:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:26.567 13:25:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:26.567 13:25:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:26.567 13:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:26.567 13:25:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.567 13:25:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.132 nvme0n1 00:26:27.132 13:25:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.132 13:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:27.132 13:25:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.132 13:25:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.132 13:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:27.132 13:25:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.132 13:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:27.132 
13:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:27.132 13:25:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.132 13:25:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.132 13:25:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.132 13:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:27.132 13:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:26:27.132 13:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:27.132 13:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:27.132 13:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:27.132 13:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:27.132 13:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjE2MzFkYzhkYzVmYWRiYzgzNTdhNjIzODBiZmI2MGQ3MGQwMTIwYzZiMjlmYTRlic7jDA==: 00:26:27.132 13:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWY1NDgxNDhjOTY5NTc3ZmQ5NDA5MjQ1MmJmYzhiY2Y2wgYh: 00:26:27.132 13:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:27.132 13:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:27.132 13:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjE2MzFkYzhkYzVmYWRiYzgzNTdhNjIzODBiZmI2MGQ3MGQwMTIwYzZiMjlmYTRlic7jDA==: 00:26:27.132 13:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWY1NDgxNDhjOTY5NTc3ZmQ5NDA5MjQ1MmJmYzhiY2Y2wgYh: ]] 00:26:27.132 13:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWY1NDgxNDhjOTY5NTc3ZmQ5NDA5MjQ1MmJmYzhiY2Y2wgYh: 00:26:27.132 13:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:26:27.132 13:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:27.132 13:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:27.132 13:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:27.132 13:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:27.132 13:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:27.132 13:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:27.132 13:25:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.132 13:25:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.132 13:25:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.132 13:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:27.132 13:25:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:27.132 13:25:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:27.132 13:25:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:27.132 13:25:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:27.132 13:25:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:27.132 13:25:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
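The cycle this trace keeps repeating boils down to a handful of RPC calls per key. A condensed bash sketch, reconstructed only from the commands visible in the trace above; rpc_cmd, get_main_ns_ip, and the keys/ckeys arrays are helpers and variables supplied by the surrounding test harness and are assumed here, not defined in this log, and the sketch is not the verbatim host/auth.sh source:

  connect_authenticate() {   # condensed from the host/auth.sh@55-65 markers in the trace
    local digest=$1 dhgroup=$2 keyid=$3
    # a controller key is passed only when a ckey exists for this keyid
    local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a "$(get_main_ns_ip)" -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key${keyid}" "${ckey[@]}"
    # verify the controller actually came up authenticated, then tear it down
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
  }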
00:26:27.132 13:25:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:27.132 13:25:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:27.132 13:25:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:27.132 13:25:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:27.132 13:25:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:27.132 13:25:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.132 13:25:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.698 nvme0n1 00:26:27.698 13:25:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.698 13:25:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:27.698 13:25:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.698 13:25:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:27.698 13:25:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.698 13:25:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.698 13:25:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:27.698 13:25:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:27.698 13:25:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.698 13:25:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.956 13:25:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.956 13:25:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:27.956 13:25:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:26:27.957 13:25:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:27.957 13:25:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:27.957 13:25:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:27.957 13:25:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:27.957 13:25:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2Q2MTBhNDNhZDcwZGFkN2VkYjc3NWI0MTA4ODQxMmQ5YjczMTRhZTVhNDRlMzkwNTk0MGYyMGVhMzA4YThkMMkbgZ0=: 00:26:27.957 13:25:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:27.957 13:25:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:27.957 13:25:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:27.957 13:25:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2Q2MTBhNDNhZDcwZGFkN2VkYjc3NWI0MTA4ODQxMmQ5YjczMTRhZTVhNDRlMzkwNTk0MGYyMGVhMzA4YThkMMkbgZ0=: 00:26:27.957 13:25:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:27.957 13:25:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:26:27.957 13:25:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:27.957 13:25:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:27.957 13:25:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:27.957 
13:25:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:27.957 13:25:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:27.957 13:25:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:27.957 13:25:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.957 13:25:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.957 13:25:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.957 13:25:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:27.957 13:25:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:27.957 13:25:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:27.957 13:25:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:27.957 13:25:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:27.957 13:25:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:27.957 13:25:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:27.957 13:25:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:27.957 13:25:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:27.957 13:25:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:27.957 13:25:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:27.957 13:25:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:27.957 13:25:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.957 13:25:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.523 nvme0n1 00:26:28.524 13:25:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.524 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:28.524 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:28.524 13:25:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.524 13:25:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.524 13:25:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.524 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:28.524 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:28.524 13:25:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.524 13:25:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.524 13:25:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.524 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:28.524 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:28.524 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:28.524 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:26:28.524 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:28.524 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:28.524 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:28.524 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:28.524 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjgwYmU5MzhhZWZhODJmZDcwYzQxMzdmMGQwZDliMjB0sihb: 00:26:28.524 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yjc2YzI2NTU4Yzc3NzBhOGFjOTMzZjEyZmFhMjI2NGE2YTAxMmVmZGNjMDEyZGUyY2UxMGU0ZTk3OWUzMjM3OGp1SzA=: 00:26:28.524 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:28.524 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:28.524 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjgwYmU5MzhhZWZhODJmZDcwYzQxMzdmMGQwZDliMjB0sihb: 00:26:28.524 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yjc2YzI2NTU4Yzc3NzBhOGFjOTMzZjEyZmFhMjI2NGE2YTAxMmVmZGNjMDEyZGUyY2UxMGU0ZTk3OWUzMjM3OGp1SzA=: ]] 00:26:28.524 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yjc2YzI2NTU4Yzc3NzBhOGFjOTMzZjEyZmFhMjI2NGE2YTAxMmVmZGNjMDEyZGUyY2UxMGU0ZTk3OWUzMjM3OGp1SzA=: 00:26:28.524 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:26:28.524 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:28.524 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:28.524 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:28.524 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:28.524 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:28.524 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:28.524 13:25:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.524 13:25:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.524 13:25:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.524 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:28.524 13:25:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:28.524 13:25:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:28.524 13:25:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:28.524 13:25:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:28.524 13:25:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:28.524 13:25:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:28.524 13:25:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:28.524 13:25:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:28.524 13:25:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:28.524 13:25:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:28.524 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:28.524 13:25:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.524 13:25:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.524 nvme0n1 00:26:28.524 13:25:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.524 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:28.524 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:28.524 13:25:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.524 13:25:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.781 13:25:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.781 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:28.781 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:28.781 13:25:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.781 13:25:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.781 13:25:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.781 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:28.781 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:26:28.781 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:28.781 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:28.781 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:28.781 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:28.781 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTVmZTMwNzQ0ZTJjODcxZjc1M2FjNjU4MTYzYmU2NGRjNjU0YjRjZDBmYzJiMDQyWrJITA==: 00:26:28.781 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmU3ODYxZmE1ZjNhMTlmYzU4NDE0MmNjMDQwYjU3ODRlZTc3OWFkYjg2NmRiZWY2HgH+kQ==: 00:26:28.781 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:28.781 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:28.781 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTVmZTMwNzQ0ZTJjODcxZjc1M2FjNjU4MTYzYmU2NGRjNjU0YjRjZDBmYzJiMDQyWrJITA==: 00:26:28.781 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmU3ODYxZmE1ZjNhMTlmYzU4NDE0MmNjMDQwYjU3ODRlZTc3OWFkYjg2NmRiZWY2HgH+kQ==: ]] 00:26:28.781 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmU3ODYxZmE1ZjNhMTlmYzU4NDE0MmNjMDQwYjU3ODRlZTc3OWFkYjg2NmRiZWY2HgH+kQ==: 00:26:28.781 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:26:28.781 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:28.781 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:28.781 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:28.781 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:28.781 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
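The ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) record traced just above is how auth.sh makes the bidirectional (controller) key optional: when no controller key is registered for a keyid, the array expands to nothing and bdev_nvme_attach_controller is invoked with --dhchap-key alone, which is what happens for key4 elsewhere in this run. A minimal sketch of that expansion, with placeholder key material standing in for whatever the test registered earlier:

    # Sketch of the ${ckeys[keyid]:+...} idiom from the trace above.
    # The array contents here are placeholders; in the test, indexes 0..3
    # hold registered controller keys and keyid 4 deliberately has none.
    ckeys=("ckey-material-0" "ckey-material-1" "ckey-material-2" "ckey-material-3" "")

    for keyid in 1 4; do
        # Expands to the two words "--dhchap-ctrlr-key ckeyN" when a
        # controller key exists for this keyid, and to nothing otherwise.
        args=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid ${keyid}: ${args[*]:-<no controller key>}"
    done

This matches the attach calls in the trace: key0 through key3 carry --dhchap-ctrlr-key ckeyN, while the key4 attach passes only --dhchap-key key4.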
00:26:28.781 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:28.781 13:25:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.781 13:25:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.781 13:25:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.781 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:28.781 13:25:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:28.781 13:25:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:28.781 13:25:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:28.781 13:25:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:28.781 13:25:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:28.781 13:25:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:28.781 13:25:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:28.781 13:25:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:28.781 13:25:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:28.781 13:25:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:28.781 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:28.781 13:25:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.781 13:25:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.781 nvme0n1 00:26:28.781 13:25:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.781 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:28.781 13:25:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.781 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:28.781 13:25:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.781 13:25:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.781 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:28.781 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:28.781 13:25:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.781 13:25:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.781 13:25:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.781 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:28.781 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:26:28.781 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:28.781 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:28.781 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:28.781 13:25:25 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:26:28.781 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDBhNTZmNWZiYzk0YmFlYTIxMWIyN2VhMzM3Yjg3NThot4LD: 00:26:28.781 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjVlYTllZmEyNjQ5MDQ0NDBmNjdiMjY3NTk3YzdjMGEoJprA: 00:26:28.781 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:28.781 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:28.782 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDBhNTZmNWZiYzk0YmFlYTIxMWIyN2VhMzM3Yjg3NThot4LD: 00:26:28.782 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjVlYTllZmEyNjQ5MDQ0NDBmNjdiMjY3NTk3YzdjMGEoJprA: ]] 00:26:28.782 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjVlYTllZmEyNjQ5MDQ0NDBmNjdiMjY3NTk3YzdjMGEoJprA: 00:26:28.782 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:26:28.782 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:28.782 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:28.782 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:28.782 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:28.782 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:28.782 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:28.782 13:25:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.782 13:25:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.782 13:25:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.782 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:28.782 13:25:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:28.782 13:25:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:28.782 13:25:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:28.782 13:25:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:28.782 13:25:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:28.782 13:25:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:28.782 13:25:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:28.782 13:25:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:28.782 13:25:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:28.782 13:25:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:28.782 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:28.782 13:25:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.782 13:25:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.040 nvme0n1 00:26:29.040 13:25:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.040 13:25:25 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:29.040 13:25:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.040 13:25:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.040 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:29.040 13:25:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.040 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:29.040 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:29.040 13:25:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.040 13:25:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.040 13:25:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.040 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:29.040 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:26:29.040 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:29.040 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:29.040 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:29.040 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:29.040 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjE2MzFkYzhkYzVmYWRiYzgzNTdhNjIzODBiZmI2MGQ3MGQwMTIwYzZiMjlmYTRlic7jDA==: 00:26:29.040 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWY1NDgxNDhjOTY5NTc3ZmQ5NDA5MjQ1MmJmYzhiY2Y2wgYh: 00:26:29.040 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:29.040 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:29.040 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjE2MzFkYzhkYzVmYWRiYzgzNTdhNjIzODBiZmI2MGQ3MGQwMTIwYzZiMjlmYTRlic7jDA==: 00:26:29.040 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWY1NDgxNDhjOTY5NTc3ZmQ5NDA5MjQ1MmJmYzhiY2Y2wgYh: ]] 00:26:29.040 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWY1NDgxNDhjOTY5NTc3ZmQ5NDA5MjQ1MmJmYzhiY2Y2wgYh: 00:26:29.040 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:26:29.040 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:29.040 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:29.040 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:29.040 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:29.040 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:29.040 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:29.040 13:25:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.040 13:25:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.040 13:25:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.040 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:29.040 13:25:25 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:26:29.040 13:25:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:29.040 13:25:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:29.040 13:25:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:29.040 13:25:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:29.040 13:25:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:29.040 13:25:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:29.040 13:25:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:29.040 13:25:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:29.040 13:25:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:29.040 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:29.040 13:25:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.040 13:25:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.040 nvme0n1 00:26:29.040 13:25:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.040 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:29.040 13:25:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.040 13:25:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.040 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:29.299 13:25:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.299 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:29.299 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:29.299 13:25:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.299 13:25:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.299 13:25:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.299 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:29.299 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:26:29.299 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:29.299 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:29.299 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:29.299 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:29.299 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2Q2MTBhNDNhZDcwZGFkN2VkYjc3NWI0MTA4ODQxMmQ5YjczMTRhZTVhNDRlMzkwNTk0MGYyMGVhMzA4YThkMMkbgZ0=: 00:26:29.299 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:29.299 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:29.299 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:29.299 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:Y2Q2MTBhNDNhZDcwZGFkN2VkYjc3NWI0MTA4ODQxMmQ5YjczMTRhZTVhNDRlMzkwNTk0MGYyMGVhMzA4YThkMMkbgZ0=: 00:26:29.299 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:29.299 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:26:29.299 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:29.299 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:29.299 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:29.299 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:29.299 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:29.299 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:29.299 13:25:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.299 13:25:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.299 13:25:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.299 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:29.299 13:25:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:29.299 13:25:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:29.299 13:25:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:29.299 13:25:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:29.299 13:25:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:29.299 13:25:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:29.299 13:25:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:29.299 13:25:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:29.299 13:25:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:29.299 13:25:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:29.299 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:29.299 13:25:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.299 13:25:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.299 nvme0n1 00:26:29.299 13:25:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.299 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:29.299 13:25:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.299 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:29.299 13:25:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.299 13:25:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.299 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:29.299 13:25:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:29.299 13:25:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:26:29.299 13:25:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.299 13:25:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.299 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:29.299 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:29.299 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:26:29.299 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:29.299 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:29.299 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:29.299 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:29.299 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjgwYmU5MzhhZWZhODJmZDcwYzQxMzdmMGQwZDliMjB0sihb: 00:26:29.299 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yjc2YzI2NTU4Yzc3NzBhOGFjOTMzZjEyZmFhMjI2NGE2YTAxMmVmZGNjMDEyZGUyY2UxMGU0ZTk3OWUzMjM3OGp1SzA=: 00:26:29.299 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:29.299 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:29.299 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjgwYmU5MzhhZWZhODJmZDcwYzQxMzdmMGQwZDliMjB0sihb: 00:26:29.299 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yjc2YzI2NTU4Yzc3NzBhOGFjOTMzZjEyZmFhMjI2NGE2YTAxMmVmZGNjMDEyZGUyY2UxMGU0ZTk3OWUzMjM3OGp1SzA=: ]] 00:26:29.299 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yjc2YzI2NTU4Yzc3NzBhOGFjOTMzZjEyZmFhMjI2NGE2YTAxMmVmZGNjMDEyZGUyY2UxMGU0ZTk3OWUzMjM3OGp1SzA=: 00:26:29.299 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:26:29.299 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:29.299 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:29.300 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:29.300 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:29.300 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:29.300 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:29.300 13:25:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.300 13:25:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.300 13:25:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.300 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:29.300 13:25:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:29.300 13:25:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:29.300 13:25:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:29.300 13:25:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:29.300 13:25:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:29.300 13:25:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
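The get_main_ns_ip trace that starts just above (local ip, the ip_candidates map, the -z checks, then echo 10.0.0.1 on the next line) resolves which address the initiator should dial for the active transport. A sketch of that helper as the trace suggests it, assuming the harness exports TEST_TRANSPORT, NVMF_INITIATOR_IP and NVMF_FIRST_TARGET_IP (only the expanded values, tcp and 10.0.0.1, are visible in this log):

    # Reconstruction of the get_main_ns_ip flow seen in the trace; the
    # TEST_TRANSPORT variable name is an assumption, while the map keys and
    # the NVMF_* names are taken from the trace itself.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        [[ -z ${TEST_TRANSPORT:-} ]] && return 1
        # Pick the variable *name* for the active transport, then expand it
        # indirectly; for tcp this yields NVMF_INITIATOR_IP -> 10.0.0.1.
        ip=${ip_candidates[$TEST_TRANSPORT]}
        [[ -z ${!ip} ]] && return 1
        echo "${!ip}"
    }

    TEST_TRANSPORT=tcp NVMF_INITIATOR_IP=10.0.0.1 get_main_ns_ip   # prints 10.0.0.1

The result feeds the -a argument of every bdev_nvme_attach_controller call in this section.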
00:26:29.300 13:25:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:29.300 13:25:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:29.300 13:25:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:29.300 13:25:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:29.300 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:29.300 13:25:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.300 13:25:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.558 nvme0n1 00:26:29.558 13:25:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.558 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:29.558 13:25:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.558 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:29.558 13:25:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.558 13:25:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.558 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:29.558 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:29.558 13:25:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.558 13:25:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.558 13:25:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.558 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:29.558 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:26:29.558 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:29.558 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:29.558 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:29.558 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:29.558 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTVmZTMwNzQ0ZTJjODcxZjc1M2FjNjU4MTYzYmU2NGRjNjU0YjRjZDBmYzJiMDQyWrJITA==: 00:26:29.558 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmU3ODYxZmE1ZjNhMTlmYzU4NDE0MmNjMDQwYjU3ODRlZTc3OWFkYjg2NmRiZWY2HgH+kQ==: 00:26:29.558 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:29.558 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:29.558 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTVmZTMwNzQ0ZTJjODcxZjc1M2FjNjU4MTYzYmU2NGRjNjU0YjRjZDBmYzJiMDQyWrJITA==: 00:26:29.558 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmU3ODYxZmE1ZjNhMTlmYzU4NDE0MmNjMDQwYjU3ODRlZTc3OWFkYjg2NmRiZWY2HgH+kQ==: ]] 00:26:29.558 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmU3ODYxZmE1ZjNhMTlmYzU4NDE0MmNjMDQwYjU3ODRlZTc3OWFkYjg2NmRiZWY2HgH+kQ==: 00:26:29.558 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
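Every (digest, dhgroup, keyid) combination in this section repeats the host-side RPC sequence that just completed above: restrict the allowed DH-HMAC-CHAP digests and dhgroups, attach the controller with the key under test, confirm it comes up as nvme0, then detach it again. rpc_cmd in the trace is the autotest wrapper around SPDK's scripts/rpc.py, so one iteration can be sketched standalone roughly as follows (the rpc.py path and the default RPC socket are assumptions, and the key names key0/ckey0 are assumed to have been registered with the target and initiator earlier in the test; method names, flags, addresses and NQNs are copied from the trace):

    # One connect/verify/disconnect iteration, mirroring the trace above.
    rpc=scripts/rpc.py   # assumed location of SPDK's RPC client

    # Allow only the digest/dhgroup pair under test for DH-HMAC-CHAP.
    "$rpc" bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072

    # Attach to the target with the keyid under test; the controller key is
    # optional (see the ${ckeys[keyid]:+...} sketch earlier).
    "$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # The step passes only if the authenticated controller actually shows up.
    [[ "$("$rpc" bdev_nvme_get_controllers | jq -r '.[].name')" == "nvme0" ]]

    "$rpc" bdev_nvme_detach_controller nvme0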
00:26:29.558 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:29.558 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:29.558 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:29.558 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:29.558 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:29.558 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:29.558 13:25:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.558 13:25:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.558 13:25:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.558 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:29.558 13:25:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:29.558 13:25:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:29.558 13:25:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:29.558 13:25:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:29.558 13:25:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:29.558 13:25:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:29.558 13:25:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:29.558 13:25:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:29.558 13:25:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:29.558 13:25:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:29.559 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:29.559 13:25:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.559 13:25:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.816 nvme0n1 00:26:29.816 13:25:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.816 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:29.816 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:29.816 13:25:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.816 13:25:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.816 13:25:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.816 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:29.816 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:29.816 13:25:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.816 13:25:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.816 13:25:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.816 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:26:29.816 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:26:29.816 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:29.816 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:29.816 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:29.816 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:29.816 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDBhNTZmNWZiYzk0YmFlYTIxMWIyN2VhMzM3Yjg3NThot4LD: 00:26:29.816 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjVlYTllZmEyNjQ5MDQ0NDBmNjdiMjY3NTk3YzdjMGEoJprA: 00:26:29.816 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:29.816 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:29.816 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDBhNTZmNWZiYzk0YmFlYTIxMWIyN2VhMzM3Yjg3NThot4LD: 00:26:29.816 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjVlYTllZmEyNjQ5MDQ0NDBmNjdiMjY3NTk3YzdjMGEoJprA: ]] 00:26:29.816 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjVlYTllZmEyNjQ5MDQ0NDBmNjdiMjY3NTk3YzdjMGEoJprA: 00:26:29.816 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:26:29.816 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:29.816 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:29.816 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:29.816 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:29.816 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:29.816 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:29.816 13:25:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.816 13:25:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.816 13:25:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.816 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:29.816 13:25:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:29.817 13:25:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:29.817 13:25:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:29.817 13:25:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:29.817 13:25:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:29.817 13:25:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:29.817 13:25:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:29.817 13:25:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:29.817 13:25:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:29.817 13:25:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:29.817 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:29.817 13:25:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.817 13:25:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.073 nvme0n1 00:26:30.073 13:25:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.073 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:30.073 13:25:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.073 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:30.073 13:25:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.073 13:25:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.073 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:30.074 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:30.074 13:25:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.074 13:25:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.074 13:25:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.074 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:30.074 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:26:30.074 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:30.074 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:30.074 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:30.074 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:30.074 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjE2MzFkYzhkYzVmYWRiYzgzNTdhNjIzODBiZmI2MGQ3MGQwMTIwYzZiMjlmYTRlic7jDA==: 00:26:30.074 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWY1NDgxNDhjOTY5NTc3ZmQ5NDA5MjQ1MmJmYzhiY2Y2wgYh: 00:26:30.074 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:30.074 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:30.074 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjE2MzFkYzhkYzVmYWRiYzgzNTdhNjIzODBiZmI2MGQ3MGQwMTIwYzZiMjlmYTRlic7jDA==: 00:26:30.074 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWY1NDgxNDhjOTY5NTc3ZmQ5NDA5MjQ1MmJmYzhiY2Y2wgYh: ]] 00:26:30.074 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWY1NDgxNDhjOTY5NTc3ZmQ5NDA5MjQ1MmJmYzhiY2Y2wgYh: 00:26:30.074 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:26:30.074 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:30.074 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:30.074 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:30.074 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:30.074 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:30.074 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:30.074 13:25:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.074 13:25:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.074 13:25:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.074 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:30.074 13:25:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:30.074 13:25:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:30.074 13:25:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:30.074 13:25:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:30.074 13:25:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:30.074 13:25:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:30.074 13:25:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:30.074 13:25:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:30.074 13:25:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:30.074 13:25:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:30.074 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:30.074 13:25:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.074 13:25:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.074 nvme0n1 00:26:30.074 13:25:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.074 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:30.074 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:30.074 13:25:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.074 13:25:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.074 13:25:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.332 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:30.332 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:30.332 13:25:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.332 13:25:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.332 13:25:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.332 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:30.332 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:26:30.332 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:30.332 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:30.332 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:30.332 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:30.332 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:Y2Q2MTBhNDNhZDcwZGFkN2VkYjc3NWI0MTA4ODQxMmQ5YjczMTRhZTVhNDRlMzkwNTk0MGYyMGVhMzA4YThkMMkbgZ0=: 00:26:30.332 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:30.332 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:30.332 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:30.332 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2Q2MTBhNDNhZDcwZGFkN2VkYjc3NWI0MTA4ODQxMmQ5YjczMTRhZTVhNDRlMzkwNTk0MGYyMGVhMzA4YThkMMkbgZ0=: 00:26:30.332 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:30.332 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:26:30.332 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:30.332 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:30.332 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:30.332 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:30.332 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:30.332 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:30.332 13:25:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.332 13:25:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.332 13:25:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.332 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:30.332 13:25:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:30.332 13:25:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:30.332 13:25:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:30.332 13:25:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:30.332 13:25:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:30.332 13:25:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:30.332 13:25:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:30.332 13:25:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:30.332 13:25:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:30.332 13:25:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:30.332 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:30.332 13:25:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.332 13:25:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.332 nvme0n1 00:26:30.332 13:25:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.332 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:30.332 13:25:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:30.332 13:25:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.332 13:25:26 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.332 13:25:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.332 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:30.332 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:30.332 13:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.332 13:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.332 13:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.332 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:30.332 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:30.332 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:26:30.332 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:30.332 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:30.332 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:30.332 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:30.332 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjgwYmU5MzhhZWZhODJmZDcwYzQxMzdmMGQwZDliMjB0sihb: 00:26:30.332 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yjc2YzI2NTU4Yzc3NzBhOGFjOTMzZjEyZmFhMjI2NGE2YTAxMmVmZGNjMDEyZGUyY2UxMGU0ZTk3OWUzMjM3OGp1SzA=: 00:26:30.332 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:30.332 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:30.332 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjgwYmU5MzhhZWZhODJmZDcwYzQxMzdmMGQwZDliMjB0sihb: 00:26:30.332 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yjc2YzI2NTU4Yzc3NzBhOGFjOTMzZjEyZmFhMjI2NGE2YTAxMmVmZGNjMDEyZGUyY2UxMGU0ZTk3OWUzMjM3OGp1SzA=: ]] 00:26:30.332 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yjc2YzI2NTU4Yzc3NzBhOGFjOTMzZjEyZmFhMjI2NGE2YTAxMmVmZGNjMDEyZGUyY2UxMGU0ZTk3OWUzMjM3OGp1SzA=: 00:26:30.332 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:26:30.332 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:30.332 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:30.332 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:30.332 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:30.332 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:30.332 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:30.332 13:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.332 13:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.332 13:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.332 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:30.332 13:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:30.332 13:25:27 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:26:30.332 13:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:30.332 13:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:30.332 13:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:30.332 13:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:30.332 13:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:30.332 13:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:30.332 13:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:30.332 13:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:30.332 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:30.332 13:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.332 13:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.589 nvme0n1 00:26:30.589 13:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.589 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:30.589 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:30.589 13:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.589 13:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.589 13:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.589 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:30.589 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:30.589 13:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.589 13:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.589 13:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.589 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:30.589 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:26:30.589 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:30.589 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:30.589 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:30.589 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:30.589 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTVmZTMwNzQ0ZTJjODcxZjc1M2FjNjU4MTYzYmU2NGRjNjU0YjRjZDBmYzJiMDQyWrJITA==: 00:26:30.589 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmU3ODYxZmE1ZjNhMTlmYzU4NDE0MmNjMDQwYjU3ODRlZTc3OWFkYjg2NmRiZWY2HgH+kQ==: 00:26:30.589 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:30.589 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:30.589 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YTVmZTMwNzQ0ZTJjODcxZjc1M2FjNjU4MTYzYmU2NGRjNjU0YjRjZDBmYzJiMDQyWrJITA==: 00:26:30.589 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmU3ODYxZmE1ZjNhMTlmYzU4NDE0MmNjMDQwYjU3ODRlZTc3OWFkYjg2NmRiZWY2HgH+kQ==: ]] 00:26:30.589 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmU3ODYxZmE1ZjNhMTlmYzU4NDE0MmNjMDQwYjU3ODRlZTc3OWFkYjg2NmRiZWY2HgH+kQ==: 00:26:30.589 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:26:30.590 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:30.590 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:30.590 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:30.590 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:30.590 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:30.590 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:30.590 13:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.590 13:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.590 13:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.590 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:30.590 13:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:30.590 13:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:30.590 13:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:30.590 13:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:30.590 13:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:30.590 13:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:30.590 13:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:30.846 13:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:30.846 13:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:30.846 13:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:30.846 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:30.846 13:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.846 13:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.846 nvme0n1 00:26:30.846 13:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.846 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:30.846 13:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.846 13:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.846 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:30.847 13:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.847 13:25:27 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:30.847 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:30.847 13:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.847 13:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.847 13:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.847 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:30.847 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:26:30.847 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:30.847 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:30.847 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:30.847 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:30.847 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDBhNTZmNWZiYzk0YmFlYTIxMWIyN2VhMzM3Yjg3NThot4LD: 00:26:30.847 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjVlYTllZmEyNjQ5MDQ0NDBmNjdiMjY3NTk3YzdjMGEoJprA: 00:26:30.847 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:30.847 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:30.847 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDBhNTZmNWZiYzk0YmFlYTIxMWIyN2VhMzM3Yjg3NThot4LD: 00:26:30.847 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjVlYTllZmEyNjQ5MDQ0NDBmNjdiMjY3NTk3YzdjMGEoJprA: ]] 00:26:30.847 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjVlYTllZmEyNjQ5MDQ0NDBmNjdiMjY3NTk3YzdjMGEoJprA: 00:26:30.847 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:26:30.847 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:30.847 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:30.847 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:30.847 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:30.847 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:30.847 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:30.847 13:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.847 13:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.105 13:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.105 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:31.105 13:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:31.105 13:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:31.105 13:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:31.105 13:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:31.105 13:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:31.105 13:25:27 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:31.105 13:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:31.105 13:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:31.105 13:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:31.105 13:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:31.105 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:31.105 13:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.105 13:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.105 nvme0n1 00:26:31.105 13:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.105 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:31.105 13:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.105 13:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.105 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:31.105 13:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.105 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:31.105 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:31.105 13:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.105 13:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.105 13:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.105 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:31.105 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:26:31.105 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:31.105 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:31.105 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:31.105 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:31.105 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjE2MzFkYzhkYzVmYWRiYzgzNTdhNjIzODBiZmI2MGQ3MGQwMTIwYzZiMjlmYTRlic7jDA==: 00:26:31.105 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWY1NDgxNDhjOTY5NTc3ZmQ5NDA5MjQ1MmJmYzhiY2Y2wgYh: 00:26:31.105 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:31.105 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:31.105 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjE2MzFkYzhkYzVmYWRiYzgzNTdhNjIzODBiZmI2MGQ3MGQwMTIwYzZiMjlmYTRlic7jDA==: 00:26:31.105 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWY1NDgxNDhjOTY5NTc3ZmQ5NDA5MjQ1MmJmYzhiY2Y2wgYh: ]] 00:26:31.105 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWY1NDgxNDhjOTY5NTc3ZmQ5NDA5MjQ1MmJmYzhiY2Y2wgYh: 00:26:31.105 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:26:31.105 13:25:27 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:31.105 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:31.105 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:31.105 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:31.105 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:31.105 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:31.105 13:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.105 13:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.363 13:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.363 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:31.363 13:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:31.363 13:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:31.363 13:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:31.363 13:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:31.364 13:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:31.364 13:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:31.364 13:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:31.364 13:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:31.364 13:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:31.364 13:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:31.364 13:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:31.364 13:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.364 13:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.364 nvme0n1 00:26:31.364 13:25:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.364 13:25:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:31.364 13:25:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.364 13:25:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.364 13:25:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:31.364 13:25:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.364 13:25:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:31.364 13:25:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:31.364 13:25:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.364 13:25:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.364 13:25:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.621 13:25:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:26:31.621 13:25:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:26:31.621 13:25:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:31.621 13:25:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:31.621 13:25:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:31.621 13:25:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:31.621 13:25:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2Q2MTBhNDNhZDcwZGFkN2VkYjc3NWI0MTA4ODQxMmQ5YjczMTRhZTVhNDRlMzkwNTk0MGYyMGVhMzA4YThkMMkbgZ0=: 00:26:31.621 13:25:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:31.621 13:25:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:31.621 13:25:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:31.621 13:25:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2Q2MTBhNDNhZDcwZGFkN2VkYjc3NWI0MTA4ODQxMmQ5YjczMTRhZTVhNDRlMzkwNTk0MGYyMGVhMzA4YThkMMkbgZ0=: 00:26:31.621 13:25:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:31.621 13:25:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:26:31.621 13:25:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:31.621 13:25:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:31.621 13:25:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:31.621 13:25:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:31.621 13:25:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:31.621 13:25:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:31.621 13:25:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.621 13:25:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.621 13:25:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.621 13:25:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:31.621 13:25:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:31.621 13:25:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:31.621 13:25:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:31.621 13:25:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:31.621 13:25:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:31.621 13:25:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:31.621 13:25:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:31.621 13:25:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:31.621 13:25:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:31.621 13:25:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:31.621 13:25:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:31.621 13:25:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:26:31.621 13:25:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.621 nvme0n1 00:26:31.621 13:25:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.621 13:25:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:31.621 13:25:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:31.621 13:25:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.621 13:25:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.621 13:25:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.879 13:25:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:31.879 13:25:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:31.879 13:25:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.879 13:25:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.879 13:25:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.879 13:25:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:31.879 13:25:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:31.879 13:25:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:26:31.879 13:25:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:31.879 13:25:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:31.879 13:25:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:31.879 13:25:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:31.879 13:25:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjgwYmU5MzhhZWZhODJmZDcwYzQxMzdmMGQwZDliMjB0sihb: 00:26:31.879 13:25:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yjc2YzI2NTU4Yzc3NzBhOGFjOTMzZjEyZmFhMjI2NGE2YTAxMmVmZGNjMDEyZGUyY2UxMGU0ZTk3OWUzMjM3OGp1SzA=: 00:26:31.879 13:25:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:31.879 13:25:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:31.879 13:25:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjgwYmU5MzhhZWZhODJmZDcwYzQxMzdmMGQwZDliMjB0sihb: 00:26:31.879 13:25:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yjc2YzI2NTU4Yzc3NzBhOGFjOTMzZjEyZmFhMjI2NGE2YTAxMmVmZGNjMDEyZGUyY2UxMGU0ZTk3OWUzMjM3OGp1SzA=: ]] 00:26:31.879 13:25:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yjc2YzI2NTU4Yzc3NzBhOGFjOTMzZjEyZmFhMjI2NGE2YTAxMmVmZGNjMDEyZGUyY2UxMGU0ZTk3OWUzMjM3OGp1SzA=: 00:26:31.879 13:25:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:26:31.879 13:25:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:31.879 13:25:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:31.879 13:25:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:31.879 13:25:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:31.879 13:25:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:31.879 13:25:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:26:31.879 13:25:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.879 13:25:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.879 13:25:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.879 13:25:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:31.879 13:25:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:31.879 13:25:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:31.879 13:25:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:31.879 13:25:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:31.879 13:25:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:31.879 13:25:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:31.879 13:25:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:31.879 13:25:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:31.879 13:25:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:31.879 13:25:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:31.879 13:25:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:31.879 13:25:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.879 13:25:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.170 nvme0n1 00:26:32.170 13:25:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.170 13:25:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:32.170 13:25:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.170 13:25:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.170 13:25:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:32.170 13:25:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.170 13:25:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:32.170 13:25:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:32.170 13:25:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.170 13:25:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.170 13:25:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.170 13:25:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:32.170 13:25:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:26:32.170 13:25:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:32.170 13:25:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:32.170 13:25:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:32.170 13:25:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:32.170 13:25:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YTVmZTMwNzQ0ZTJjODcxZjc1M2FjNjU4MTYzYmU2NGRjNjU0YjRjZDBmYzJiMDQyWrJITA==: 00:26:32.170 13:25:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmU3ODYxZmE1ZjNhMTlmYzU4NDE0MmNjMDQwYjU3ODRlZTc3OWFkYjg2NmRiZWY2HgH+kQ==: 00:26:32.170 13:25:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:32.170 13:25:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:32.170 13:25:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTVmZTMwNzQ0ZTJjODcxZjc1M2FjNjU4MTYzYmU2NGRjNjU0YjRjZDBmYzJiMDQyWrJITA==: 00:26:32.170 13:25:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmU3ODYxZmE1ZjNhMTlmYzU4NDE0MmNjMDQwYjU3ODRlZTc3OWFkYjg2NmRiZWY2HgH+kQ==: ]] 00:26:32.170 13:25:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmU3ODYxZmE1ZjNhMTlmYzU4NDE0MmNjMDQwYjU3ODRlZTc3OWFkYjg2NmRiZWY2HgH+kQ==: 00:26:32.170 13:25:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:26:32.170 13:25:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:32.170 13:25:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:32.170 13:25:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:32.170 13:25:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:32.170 13:25:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:32.170 13:25:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:32.170 13:25:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.170 13:25:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.170 13:25:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.170 13:25:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:32.170 13:25:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:32.170 13:25:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:32.170 13:25:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:32.170 13:25:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:32.170 13:25:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:32.170 13:25:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:32.170 13:25:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:32.170 13:25:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:32.170 13:25:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:32.170 13:25:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:32.170 13:25:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:32.170 13:25:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.170 13:25:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.449 nvme0n1 00:26:32.449 13:25:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.449 13:25:29 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:32.449 13:25:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.449 13:25:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:32.449 13:25:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.449 13:25:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.708 13:25:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:32.708 13:25:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:32.708 13:25:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.708 13:25:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.708 13:25:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.708 13:25:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:32.708 13:25:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:26:32.708 13:25:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:32.708 13:25:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:32.708 13:25:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:32.708 13:25:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:32.708 13:25:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDBhNTZmNWZiYzk0YmFlYTIxMWIyN2VhMzM3Yjg3NThot4LD: 00:26:32.708 13:25:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjVlYTllZmEyNjQ5MDQ0NDBmNjdiMjY3NTk3YzdjMGEoJprA: 00:26:32.708 13:25:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:32.708 13:25:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:32.708 13:25:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDBhNTZmNWZiYzk0YmFlYTIxMWIyN2VhMzM3Yjg3NThot4LD: 00:26:32.708 13:25:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjVlYTllZmEyNjQ5MDQ0NDBmNjdiMjY3NTk3YzdjMGEoJprA: ]] 00:26:32.708 13:25:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjVlYTllZmEyNjQ5MDQ0NDBmNjdiMjY3NTk3YzdjMGEoJprA: 00:26:32.708 13:25:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:26:32.708 13:25:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:32.708 13:25:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:32.708 13:25:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:32.708 13:25:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:32.708 13:25:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:32.708 13:25:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:32.708 13:25:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.708 13:25:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.708 13:25:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.708 13:25:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:32.708 13:25:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:26:32.708 13:25:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:32.708 13:25:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:32.708 13:25:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:32.708 13:25:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:32.708 13:25:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:32.708 13:25:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:32.708 13:25:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:32.708 13:25:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:32.708 13:25:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:32.708 13:25:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:32.708 13:25:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.708 13:25:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.967 nvme0n1 00:26:32.967 13:25:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.967 13:25:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:32.967 13:25:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.967 13:25:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.967 13:25:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:32.967 13:25:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.967 13:25:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:32.967 13:25:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:32.967 13:25:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.967 13:25:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.967 13:25:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.967 13:25:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:32.967 13:25:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:26:32.967 13:25:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:32.967 13:25:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:32.967 13:25:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:32.967 13:25:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:32.967 13:25:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjE2MzFkYzhkYzVmYWRiYzgzNTdhNjIzODBiZmI2MGQ3MGQwMTIwYzZiMjlmYTRlic7jDA==: 00:26:32.967 13:25:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWY1NDgxNDhjOTY5NTc3ZmQ5NDA5MjQ1MmJmYzhiY2Y2wgYh: 00:26:32.967 13:25:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:32.967 13:25:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:32.967 13:25:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:MjE2MzFkYzhkYzVmYWRiYzgzNTdhNjIzODBiZmI2MGQ3MGQwMTIwYzZiMjlmYTRlic7jDA==: 00:26:32.967 13:25:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWY1NDgxNDhjOTY5NTc3ZmQ5NDA5MjQ1MmJmYzhiY2Y2wgYh: ]] 00:26:32.967 13:25:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWY1NDgxNDhjOTY5NTc3ZmQ5NDA5MjQ1MmJmYzhiY2Y2wgYh: 00:26:32.967 13:25:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:26:32.967 13:25:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:32.967 13:25:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:32.967 13:25:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:32.967 13:25:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:32.967 13:25:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:32.967 13:25:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:32.967 13:25:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.967 13:25:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.967 13:25:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.967 13:25:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:32.967 13:25:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:32.967 13:25:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:32.967 13:25:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:32.967 13:25:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:32.967 13:25:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:32.967 13:25:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:32.967 13:25:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:32.967 13:25:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:32.967 13:25:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:32.967 13:25:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:32.967 13:25:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:32.967 13:25:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.967 13:25:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.532 nvme0n1 00:26:33.532 13:25:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.532 13:25:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:33.532 13:25:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:33.532 13:25:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.532 13:25:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.532 13:25:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.532 13:25:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:26:33.532 13:25:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:33.532 13:25:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.532 13:25:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.532 13:25:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.532 13:25:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:33.532 13:25:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:26:33.532 13:25:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:33.532 13:25:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:33.532 13:25:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:33.532 13:25:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:33.532 13:25:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2Q2MTBhNDNhZDcwZGFkN2VkYjc3NWI0MTA4ODQxMmQ5YjczMTRhZTVhNDRlMzkwNTk0MGYyMGVhMzA4YThkMMkbgZ0=: 00:26:33.532 13:25:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:33.532 13:25:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:33.532 13:25:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:33.533 13:25:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2Q2MTBhNDNhZDcwZGFkN2VkYjc3NWI0MTA4ODQxMmQ5YjczMTRhZTVhNDRlMzkwNTk0MGYyMGVhMzA4YThkMMkbgZ0=: 00:26:33.533 13:25:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:33.533 13:25:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:26:33.533 13:25:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:33.533 13:25:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:33.533 13:25:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:33.533 13:25:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:33.533 13:25:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:33.533 13:25:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:33.533 13:25:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.533 13:25:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.533 13:25:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.533 13:25:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:33.533 13:25:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:33.533 13:25:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:33.533 13:25:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:33.533 13:25:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:33.533 13:25:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:33.533 13:25:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:33.533 13:25:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:33.533 13:25:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
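[Editor's note] The get_main_ns_ip helper traced just above (nvmf/common.sh@741-755) resolves the initiator address by mapping the transport to the *name* of an environment variable and then dereferencing it, which is why the trace prints "ip=NVMF_INITIATOR_IP" before echoing 10.0.0.1. The following is a minimal sketch reconstructed from the expanded xtrace lines only; the use of $TEST_TRANSPORT and the error handling are assumptions, not visible in this log, and this is not the verbatim SPDK script.

get_main_ns_ip() {
    # Map each transport to the name of the variable holding its address,
    # exactly as the candidate assignments in the trace show for rdma and tcp.
    local ip
    local -A ip_candidates=(
        ["rdma"]=NVMF_FIRST_TARGET_IP
        ["tcp"]=NVMF_INITIATOR_IP
    )

    # Assumption: the transport comes from $TEST_TRANSPORT ("tcp" in this run);
    # the trace only shows the already-expanded checks such as "[[ -z tcp ]]".
    [[ -n $TEST_TRANSPORT && -n ${ip_candidates[$TEST_TRANSPORT]} ]] || return 1

    ip=${ip_candidates[$TEST_TRANSPORT]}   # ip=NVMF_INITIATOR_IP
    [[ -n ${!ip} ]] || return 1            # indirect expansion of $NVMF_INITIATOR_IP
    echo "${!ip}"                          # prints 10.0.0.1 in this run
}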
00:26:33.533 13:25:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:33.533 13:25:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:33.533 13:25:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:33.533 13:25:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.533 13:25:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.791 nvme0n1 00:26:33.791 13:25:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.791 13:25:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:33.791 13:25:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.791 13:25:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.791 13:25:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:33.791 13:25:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.791 13:25:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:33.791 13:25:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:33.791 13:25:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.791 13:25:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.791 13:25:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.791 13:25:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:33.791 13:25:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:33.791 13:25:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:26:33.791 13:25:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:33.791 13:25:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:33.791 13:25:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:33.791 13:25:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:33.791 13:25:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjgwYmU5MzhhZWZhODJmZDcwYzQxMzdmMGQwZDliMjB0sihb: 00:26:33.791 13:25:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yjc2YzI2NTU4Yzc3NzBhOGFjOTMzZjEyZmFhMjI2NGE2YTAxMmVmZGNjMDEyZGUyY2UxMGU0ZTk3OWUzMjM3OGp1SzA=: 00:26:33.791 13:25:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:33.791 13:25:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:33.791 13:25:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjgwYmU5MzhhZWZhODJmZDcwYzQxMzdmMGQwZDliMjB0sihb: 00:26:33.791 13:25:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yjc2YzI2NTU4Yzc3NzBhOGFjOTMzZjEyZmFhMjI2NGE2YTAxMmVmZGNjMDEyZGUyY2UxMGU0ZTk3OWUzMjM3OGp1SzA=: ]] 00:26:33.791 13:25:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yjc2YzI2NTU4Yzc3NzBhOGFjOTMzZjEyZmFhMjI2NGE2YTAxMmVmZGNjMDEyZGUyY2UxMGU0ZTk3OWUzMjM3OGp1SzA=: 00:26:33.791 13:25:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:26:33.791 13:25:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
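[Editor's note] Each connect_authenticate call traced here (host/auth.sh@55-65) exercises one digest/DH-group/key combination end to end against the SPDK host: restrict the allowed DH-HMAC-CHAP parameters, attach over TCP with the key under test, verify the controller appears, then detach before the next iteration. The sketch below condenses what the xtrace shows; rpc_cmd is the autotest wrapper around scripts/rpc.py, the ckeys array of optional controller keys is defined earlier in the test outside this excerpt, and the real script builds the ckey arguments into an array rather than inline.

connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3

    # Restrict the host to the digest/DH group pair under test.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Attach using keyring entries key<N>/ckey<N>; keyid 4 has no controller key,
    # so --dhchap-ctrlr-key is omitted for it (visible in the attach lines above).
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a "$(get_main_ns_ip)" -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" \
        ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}

    # The combination passes if the controller comes up; it is then detached so the
    # next digest/dhgroup/keyid iteration starts from a clean state.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
}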
00:26:33.791 13:25:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:33.791 13:25:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:33.791 13:25:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:33.791 13:25:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:33.791 13:25:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:33.791 13:25:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.791 13:25:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.049 13:25:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.049 13:25:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:34.049 13:25:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:34.049 13:25:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:34.049 13:25:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:34.049 13:25:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:34.049 13:25:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:34.049 13:25:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:34.049 13:25:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:34.049 13:25:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:34.049 13:25:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:34.049 13:25:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:34.049 13:25:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:34.049 13:25:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.049 13:25:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.616 nvme0n1 00:26:34.616 13:25:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.616 13:25:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:34.616 13:25:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.616 13:25:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.616 13:25:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:34.616 13:25:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.616 13:25:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:34.616 13:25:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:34.616 13:25:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.616 13:25:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.616 13:25:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.616 13:25:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:34.616 13:25:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:26:34.616 13:25:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:34.616 13:25:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:34.616 13:25:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:34.616 13:25:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:34.616 13:25:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTVmZTMwNzQ0ZTJjODcxZjc1M2FjNjU4MTYzYmU2NGRjNjU0YjRjZDBmYzJiMDQyWrJITA==: 00:26:34.616 13:25:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmU3ODYxZmE1ZjNhMTlmYzU4NDE0MmNjMDQwYjU3ODRlZTc3OWFkYjg2NmRiZWY2HgH+kQ==: 00:26:34.616 13:25:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:34.616 13:25:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:34.616 13:25:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTVmZTMwNzQ0ZTJjODcxZjc1M2FjNjU4MTYzYmU2NGRjNjU0YjRjZDBmYzJiMDQyWrJITA==: 00:26:34.616 13:25:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmU3ODYxZmE1ZjNhMTlmYzU4NDE0MmNjMDQwYjU3ODRlZTc3OWFkYjg2NmRiZWY2HgH+kQ==: ]] 00:26:34.616 13:25:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmU3ODYxZmE1ZjNhMTlmYzU4NDE0MmNjMDQwYjU3ODRlZTc3OWFkYjg2NmRiZWY2HgH+kQ==: 00:26:34.616 13:25:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:26:34.616 13:25:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:34.616 13:25:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:34.616 13:25:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:34.616 13:25:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:34.616 13:25:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:34.616 13:25:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:34.616 13:25:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.616 13:25:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.616 13:25:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.616 13:25:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:34.616 13:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:34.616 13:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:34.616 13:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:34.616 13:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:34.616 13:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:34.616 13:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:34.616 13:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:34.616 13:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:34.616 13:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:34.616 13:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:34.616 13:25:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:34.616 13:25:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.616 13:25:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.183 nvme0n1 00:26:35.183 13:25:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.183 13:25:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:35.183 13:25:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.183 13:25:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.183 13:25:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:35.183 13:25:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.183 13:25:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:35.183 13:25:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:35.183 13:25:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.183 13:25:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.183 13:25:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.183 13:25:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:35.183 13:25:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:26:35.183 13:25:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:35.183 13:25:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:35.183 13:25:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:35.183 13:25:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:35.183 13:25:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDBhNTZmNWZiYzk0YmFlYTIxMWIyN2VhMzM3Yjg3NThot4LD: 00:26:35.183 13:25:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjVlYTllZmEyNjQ5MDQ0NDBmNjdiMjY3NTk3YzdjMGEoJprA: 00:26:35.183 13:25:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:35.183 13:25:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:35.183 13:25:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDBhNTZmNWZiYzk0YmFlYTIxMWIyN2VhMzM3Yjg3NThot4LD: 00:26:35.183 13:25:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjVlYTllZmEyNjQ5MDQ0NDBmNjdiMjY3NTk3YzdjMGEoJprA: ]] 00:26:35.183 13:25:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjVlYTllZmEyNjQ5MDQ0NDBmNjdiMjY3NTk3YzdjMGEoJprA: 00:26:35.183 13:25:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:26:35.183 13:25:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:35.183 13:25:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:35.183 13:25:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:35.183 13:25:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:35.183 13:25:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:35.183 13:25:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:26:35.183 13:25:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.183 13:25:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.183 13:25:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.183 13:25:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:35.183 13:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:35.183 13:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:35.183 13:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:35.183 13:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:35.183 13:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:35.183 13:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:35.183 13:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:35.183 13:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:35.183 13:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:35.183 13:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:35.183 13:25:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:35.183 13:25:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.183 13:25:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.749 nvme0n1 00:26:35.749 13:25:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.749 13:25:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:35.749 13:25:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.749 13:25:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.749 13:25:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:36.008 13:25:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.008 13:25:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:36.008 13:25:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:36.008 13:25:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.008 13:25:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.008 13:25:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.008 13:25:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:36.008 13:25:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:26:36.008 13:25:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:36.008 13:25:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:36.008 13:25:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:36.008 13:25:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:36.008 13:25:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MjE2MzFkYzhkYzVmYWRiYzgzNTdhNjIzODBiZmI2MGQ3MGQwMTIwYzZiMjlmYTRlic7jDA==: 00:26:36.008 13:25:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWY1NDgxNDhjOTY5NTc3ZmQ5NDA5MjQ1MmJmYzhiY2Y2wgYh: 00:26:36.008 13:25:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:36.008 13:25:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:36.008 13:25:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjE2MzFkYzhkYzVmYWRiYzgzNTdhNjIzODBiZmI2MGQ3MGQwMTIwYzZiMjlmYTRlic7jDA==: 00:26:36.008 13:25:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWY1NDgxNDhjOTY5NTc3ZmQ5NDA5MjQ1MmJmYzhiY2Y2wgYh: ]] 00:26:36.008 13:25:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWY1NDgxNDhjOTY5NTc3ZmQ5NDA5MjQ1MmJmYzhiY2Y2wgYh: 00:26:36.008 13:25:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:26:36.008 13:25:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:36.008 13:25:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:36.008 13:25:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:36.008 13:25:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:36.008 13:25:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:36.008 13:25:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:36.008 13:25:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.008 13:25:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.008 13:25:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.008 13:25:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:36.008 13:25:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:36.008 13:25:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:36.008 13:25:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:36.008 13:25:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:36.008 13:25:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:36.008 13:25:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:36.008 13:25:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:36.008 13:25:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:36.008 13:25:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:36.008 13:25:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:36.008 13:25:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:36.008 13:25:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.008 13:25:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.574 nvme0n1 00:26:36.574 13:25:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.574 13:25:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:26:36.574 13:25:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.574 13:25:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.574 13:25:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:36.574 13:25:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.574 13:25:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:36.574 13:25:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:36.574 13:25:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.574 13:25:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.574 13:25:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.574 13:25:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:36.574 13:25:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:26:36.574 13:25:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:36.574 13:25:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:36.574 13:25:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:36.574 13:25:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:36.574 13:25:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2Q2MTBhNDNhZDcwZGFkN2VkYjc3NWI0MTA4ODQxMmQ5YjczMTRhZTVhNDRlMzkwNTk0MGYyMGVhMzA4YThkMMkbgZ0=: 00:26:36.574 13:25:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:36.574 13:25:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:36.574 13:25:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:36.574 13:25:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2Q2MTBhNDNhZDcwZGFkN2VkYjc3NWI0MTA4ODQxMmQ5YjczMTRhZTVhNDRlMzkwNTk0MGYyMGVhMzA4YThkMMkbgZ0=: 00:26:36.574 13:25:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:36.574 13:25:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:26:36.574 13:25:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:36.574 13:25:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:36.574 13:25:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:36.574 13:25:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:36.574 13:25:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:36.574 13:25:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:36.574 13:25:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.574 13:25:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.574 13:25:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.574 13:25:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:36.574 13:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:36.574 13:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:36.574 13:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:36.574 13:25:33 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:36.574 13:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:36.574 13:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:36.574 13:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:36.574 13:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:36.574 13:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:36.574 13:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:36.574 13:25:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:36.574 13:25:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.574 13:25:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.139 nvme0n1 00:26:37.139 13:25:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:37.139 13:25:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:37.139 13:25:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:37.139 13:25:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:37.139 13:25:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.139 13:25:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:37.398 13:25:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:37.398 13:25:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:37.398 13:25:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:37.398 13:25:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.398 13:25:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:37.398 13:25:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:37.398 13:25:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:37.398 13:25:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:37.398 13:25:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:26:37.398 13:25:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:37.398 13:25:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:37.398 13:25:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:37.398 13:25:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:37.398 13:25:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjgwYmU5MzhhZWZhODJmZDcwYzQxMzdmMGQwZDliMjB0sihb: 00:26:37.398 13:25:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yjc2YzI2NTU4Yzc3NzBhOGFjOTMzZjEyZmFhMjI2NGE2YTAxMmVmZGNjMDEyZGUyY2UxMGU0ZTk3OWUzMjM3OGp1SzA=: 00:26:37.398 13:25:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:37.398 13:25:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:37.398 13:25:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MjgwYmU5MzhhZWZhODJmZDcwYzQxMzdmMGQwZDliMjB0sihb: 00:26:37.398 13:25:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yjc2YzI2NTU4Yzc3NzBhOGFjOTMzZjEyZmFhMjI2NGE2YTAxMmVmZGNjMDEyZGUyY2UxMGU0ZTk3OWUzMjM3OGp1SzA=: ]] 00:26:37.398 13:25:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yjc2YzI2NTU4Yzc3NzBhOGFjOTMzZjEyZmFhMjI2NGE2YTAxMmVmZGNjMDEyZGUyY2UxMGU0ZTk3OWUzMjM3OGp1SzA=: 00:26:37.398 13:25:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:26:37.398 13:25:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:37.398 13:25:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:37.398 13:25:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:37.398 13:25:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:37.398 13:25:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:37.398 13:25:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:37.398 13:25:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:37.398 13:25:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.398 13:25:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:37.398 13:25:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:37.398 13:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:37.398 13:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:37.398 13:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:37.398 13:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:37.398 13:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:37.399 13:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:37.399 13:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:37.399 13:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:37.399 13:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:37.399 13:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:37.399 13:25:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:37.399 13:25:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:37.399 13:25:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.399 nvme0n1 00:26:37.399 13:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:37.399 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:37.399 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:37.399 13:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:37.399 13:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.399 13:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:37.399 13:25:34 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:37.399 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:37.399 13:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:37.399 13:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.399 13:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:37.399 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:37.399 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:26:37.399 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:37.399 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:37.399 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:37.399 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:37.399 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTVmZTMwNzQ0ZTJjODcxZjc1M2FjNjU4MTYzYmU2NGRjNjU0YjRjZDBmYzJiMDQyWrJITA==: 00:26:37.399 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmU3ODYxZmE1ZjNhMTlmYzU4NDE0MmNjMDQwYjU3ODRlZTc3OWFkYjg2NmRiZWY2HgH+kQ==: 00:26:37.399 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:37.399 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:37.399 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTVmZTMwNzQ0ZTJjODcxZjc1M2FjNjU4MTYzYmU2NGRjNjU0YjRjZDBmYzJiMDQyWrJITA==: 00:26:37.399 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmU3ODYxZmE1ZjNhMTlmYzU4NDE0MmNjMDQwYjU3ODRlZTc3OWFkYjg2NmRiZWY2HgH+kQ==: ]] 00:26:37.399 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmU3ODYxZmE1ZjNhMTlmYzU4NDE0MmNjMDQwYjU3ODRlZTc3OWFkYjg2NmRiZWY2HgH+kQ==: 00:26:37.399 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:26:37.399 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:37.399 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:37.399 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:37.399 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:37.399 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:37.399 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:37.399 13:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:37.399 13:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.399 13:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:37.399 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:37.399 13:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:37.399 13:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:37.399 13:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:37.399 13:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:37.399 13:25:34 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:37.399 13:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:37.399 13:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:37.399 13:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:37.399 13:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:37.399 13:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:37.399 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:37.399 13:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:37.399 13:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.657 nvme0n1 00:26:37.657 13:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:37.657 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:37.657 13:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:37.657 13:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.657 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:37.657 13:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:37.657 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:37.657 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:37.657 13:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:37.657 13:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.657 13:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:37.657 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:37.657 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:26:37.657 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:37.657 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:37.657 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:37.657 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:37.657 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDBhNTZmNWZiYzk0YmFlYTIxMWIyN2VhMzM3Yjg3NThot4LD: 00:26:37.657 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjVlYTllZmEyNjQ5MDQ0NDBmNjdiMjY3NTk3YzdjMGEoJprA: 00:26:37.657 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:37.657 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:37.657 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDBhNTZmNWZiYzk0YmFlYTIxMWIyN2VhMzM3Yjg3NThot4LD: 00:26:37.657 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjVlYTllZmEyNjQ5MDQ0NDBmNjdiMjY3NTk3YzdjMGEoJprA: ]] 00:26:37.658 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjVlYTllZmEyNjQ5MDQ0NDBmNjdiMjY3NTk3YzdjMGEoJprA: 00:26:37.658 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:26:37.658 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:37.658 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:37.658 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:37.658 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:37.658 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:37.658 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:37.658 13:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:37.658 13:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.658 13:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:37.658 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:37.658 13:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:37.658 13:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:37.658 13:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:37.658 13:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:37.658 13:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:37.658 13:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:37.658 13:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:37.658 13:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:37.658 13:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:37.658 13:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:37.658 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:37.658 13:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:37.658 13:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.916 nvme0n1 00:26:37.917 13:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:37.917 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:37.917 13:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:37.917 13:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.917 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:37.917 13:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:37.917 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:37.917 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:37.917 13:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:37.917 13:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.917 13:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:37.917 13:25:34 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:37.917 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:26:37.917 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:37.917 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:37.917 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:37.917 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:37.917 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjE2MzFkYzhkYzVmYWRiYzgzNTdhNjIzODBiZmI2MGQ3MGQwMTIwYzZiMjlmYTRlic7jDA==: 00:26:37.917 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWY1NDgxNDhjOTY5NTc3ZmQ5NDA5MjQ1MmJmYzhiY2Y2wgYh: 00:26:37.917 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:37.917 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:37.917 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjE2MzFkYzhkYzVmYWRiYzgzNTdhNjIzODBiZmI2MGQ3MGQwMTIwYzZiMjlmYTRlic7jDA==: 00:26:37.917 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWY1NDgxNDhjOTY5NTc3ZmQ5NDA5MjQ1MmJmYzhiY2Y2wgYh: ]] 00:26:37.917 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWY1NDgxNDhjOTY5NTc3ZmQ5NDA5MjQ1MmJmYzhiY2Y2wgYh: 00:26:37.917 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:26:37.917 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:37.917 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:37.917 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:37.917 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:37.917 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:37.917 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:37.917 13:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:37.917 13:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.917 13:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:37.917 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:37.917 13:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:37.917 13:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:37.917 13:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:37.917 13:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:37.917 13:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:37.917 13:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:37.917 13:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:37.917 13:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:37.917 13:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:37.917 13:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:37.917 13:25:34 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:37.917 13:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:37.917 13:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.917 nvme0n1 00:26:37.917 13:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:37.917 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:37.917 13:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:37.917 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:37.917 13:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.917 13:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:37.917 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:37.917 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:37.917 13:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:37.917 13:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.917 13:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:37.917 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:37.917 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:26:37.917 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:37.917 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:37.917 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:37.917 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:37.917 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2Q2MTBhNDNhZDcwZGFkN2VkYjc3NWI0MTA4ODQxMmQ5YjczMTRhZTVhNDRlMzkwNTk0MGYyMGVhMzA4YThkMMkbgZ0=: 00:26:37.917 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:37.917 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:37.917 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:37.917 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2Q2MTBhNDNhZDcwZGFkN2VkYjc3NWI0MTA4ODQxMmQ5YjczMTRhZTVhNDRlMzkwNTk0MGYyMGVhMzA4YThkMMkbgZ0=: 00:26:37.917 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:37.917 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:26:37.917 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:37.917 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:37.917 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:37.917 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:37.917 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:37.917 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:37.917 13:25:34 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:26:37.917 13:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.917 13:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:37.917 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:37.917 13:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:37.917 13:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:37.917 13:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:37.917 13:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:37.917 13:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:38.176 13:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:38.176 13:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:38.176 13:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:38.176 13:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:38.176 13:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:38.176 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:38.176 13:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.176 13:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.176 nvme0n1 00:26:38.176 13:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.176 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:38.176 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:38.176 13:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.176 13:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.176 13:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.176 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:38.176 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:38.176 13:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.176 13:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.176 13:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.176 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:38.176 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:38.176 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:26:38.176 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:38.176 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:38.176 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:38.176 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:38.176 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MjgwYmU5MzhhZWZhODJmZDcwYzQxMzdmMGQwZDliMjB0sihb: 00:26:38.176 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yjc2YzI2NTU4Yzc3NzBhOGFjOTMzZjEyZmFhMjI2NGE2YTAxMmVmZGNjMDEyZGUyY2UxMGU0ZTk3OWUzMjM3OGp1SzA=: 00:26:38.176 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:38.176 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:38.176 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjgwYmU5MzhhZWZhODJmZDcwYzQxMzdmMGQwZDliMjB0sihb: 00:26:38.176 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yjc2YzI2NTU4Yzc3NzBhOGFjOTMzZjEyZmFhMjI2NGE2YTAxMmVmZGNjMDEyZGUyY2UxMGU0ZTk3OWUzMjM3OGp1SzA=: ]] 00:26:38.176 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yjc2YzI2NTU4Yzc3NzBhOGFjOTMzZjEyZmFhMjI2NGE2YTAxMmVmZGNjMDEyZGUyY2UxMGU0ZTk3OWUzMjM3OGp1SzA=: 00:26:38.176 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:26:38.176 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:38.176 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:38.176 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:38.176 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:38.176 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:38.176 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:38.176 13:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.176 13:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.176 13:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.176 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:38.176 13:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:38.176 13:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:38.176 13:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:38.176 13:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:38.176 13:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:38.176 13:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:38.176 13:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:38.176 13:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:38.176 13:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:38.176 13:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:38.176 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:38.176 13:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.176 13:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.435 nvme0n1 00:26:38.435 13:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.435 
13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:38.435 13:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:38.435 13:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.435 13:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.435 13:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.435 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:38.435 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:38.435 13:25:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.435 13:25:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.435 13:25:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.435 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:38.435 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:26:38.435 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:38.435 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:38.435 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:38.435 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:38.435 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTVmZTMwNzQ0ZTJjODcxZjc1M2FjNjU4MTYzYmU2NGRjNjU0YjRjZDBmYzJiMDQyWrJITA==: 00:26:38.435 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmU3ODYxZmE1ZjNhMTlmYzU4NDE0MmNjMDQwYjU3ODRlZTc3OWFkYjg2NmRiZWY2HgH+kQ==: 00:26:38.435 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:38.435 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:38.435 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTVmZTMwNzQ0ZTJjODcxZjc1M2FjNjU4MTYzYmU2NGRjNjU0YjRjZDBmYzJiMDQyWrJITA==: 00:26:38.435 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmU3ODYxZmE1ZjNhMTlmYzU4NDE0MmNjMDQwYjU3ODRlZTc3OWFkYjg2NmRiZWY2HgH+kQ==: ]] 00:26:38.435 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmU3ODYxZmE1ZjNhMTlmYzU4NDE0MmNjMDQwYjU3ODRlZTc3OWFkYjg2NmRiZWY2HgH+kQ==: 00:26:38.435 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:26:38.435 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:38.435 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:38.435 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:38.435 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:38.435 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:38.435 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:38.435 13:25:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.435 13:25:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.435 13:25:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.435 13:25:35 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:38.435 13:25:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:38.435 13:25:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:38.435 13:25:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:38.435 13:25:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:38.435 13:25:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:38.435 13:25:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:38.435 13:25:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:38.435 13:25:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:38.435 13:25:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:38.435 13:25:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:38.435 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:38.435 13:25:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.435 13:25:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.435 nvme0n1 00:26:38.435 13:25:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.435 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:38.435 13:25:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.435 13:25:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.435 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:38.694 13:25:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.694 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:38.694 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:38.694 13:25:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.694 13:25:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.694 13:25:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.694 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:38.694 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:26:38.694 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:38.694 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:38.694 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:38.694 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:38.694 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDBhNTZmNWZiYzk0YmFlYTIxMWIyN2VhMzM3Yjg3NThot4LD: 00:26:38.694 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjVlYTllZmEyNjQ5MDQ0NDBmNjdiMjY3NTk3YzdjMGEoJprA: 00:26:38.694 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:38.694 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
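Note on the nvmet_auth_set_key traces above: xtrace only records the echoed values (the 'hmac(...)' digest string, the DH group name, and the DHHC-1 secrets), not where they are redirected. The sketch below shows one plausible shape of that helper, purely for orientation; the configfs path and the dhchap_* attribute names are assumptions that do not appear in this excerpt, and the keys/ckeys arrays are populated earlier in host/auth.sh.

# Hypothetical reconstruction mirroring the traced nvmet_auth_set_key helper.
# Destination paths below are assumed, not taken from this log.
nvmet_auth_set_key_sketch() {
    local digest=$1 dhgroup=$2 keyid=$3
    local key=${keys[keyid]} ckey=${ckeys[keyid]}
    # Assumed target: kernel nvmet configfs entry for the test host NQN.
    local host_cfg=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo "hmac(${digest})" > "${host_cfg}/dhchap_hash"      # e.g. 'hmac(sha512)'
    echo "${dhgroup}"      > "${host_cfg}/dhchap_dhgroup"   # e.g. ffdhe3072
    echo "${key}"          > "${host_cfg}/dhchap_key"
    [[ -z ${ckey} ]] || echo "${ckey}" > "${host_cfg}/dhchap_ctrl_key"
}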
00:26:38.694 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDBhNTZmNWZiYzk0YmFlYTIxMWIyN2VhMzM3Yjg3NThot4LD: 00:26:38.694 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjVlYTllZmEyNjQ5MDQ0NDBmNjdiMjY3NTk3YzdjMGEoJprA: ]] 00:26:38.694 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjVlYTllZmEyNjQ5MDQ0NDBmNjdiMjY3NTk3YzdjMGEoJprA: 00:26:38.694 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:26:38.694 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:38.694 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:38.694 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:38.694 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:38.694 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:38.694 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:38.694 13:25:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.694 13:25:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.694 13:25:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.694 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:38.694 13:25:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:38.694 13:25:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:38.694 13:25:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:38.694 13:25:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:38.694 13:25:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:38.694 13:25:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:38.694 13:25:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:38.694 13:25:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:38.694 13:25:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:38.694 13:25:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:38.694 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:38.694 13:25:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.694 13:25:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.694 nvme0n1 00:26:38.694 13:25:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.694 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:38.694 13:25:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.694 13:25:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.694 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:38.694 13:25:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.954 13:25:35 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:38.954 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:38.954 13:25:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.954 13:25:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.954 13:25:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.954 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:38.954 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:26:38.954 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:38.954 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:38.954 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:38.954 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:38.954 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjE2MzFkYzhkYzVmYWRiYzgzNTdhNjIzODBiZmI2MGQ3MGQwMTIwYzZiMjlmYTRlic7jDA==: 00:26:38.954 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWY1NDgxNDhjOTY5NTc3ZmQ5NDA5MjQ1MmJmYzhiY2Y2wgYh: 00:26:38.954 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:38.954 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:38.954 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjE2MzFkYzhkYzVmYWRiYzgzNTdhNjIzODBiZmI2MGQ3MGQwMTIwYzZiMjlmYTRlic7jDA==: 00:26:38.954 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWY1NDgxNDhjOTY5NTc3ZmQ5NDA5MjQ1MmJmYzhiY2Y2wgYh: ]] 00:26:38.954 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWY1NDgxNDhjOTY5NTc3ZmQ5NDA5MjQ1MmJmYzhiY2Y2wgYh: 00:26:38.954 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:26:38.954 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:38.954 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:38.954 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:38.954 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:38.954 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:38.954 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:38.954 13:25:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.954 13:25:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.954 13:25:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.954 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:38.954 13:25:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:38.954 13:25:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:38.954 13:25:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:38.954 13:25:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:38.954 13:25:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
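The get_main_ns_ip trace repeated before every attach builds an associative array mapping the transport to the name of the variable that holds the initiator address, then prints that variable's value by indirect expansion. A rough bash reconstruction follows; the candidate map and the final 10.0.0.1 come straight from the trace, while the TEST_TRANSPORT variable name and the failure handling are assumptions.

# Sketch of the traced get_main_ns_ip helper (nvmf/common.sh@741-755).
get_main_ns_ip_sketch() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP
    [[ -z ${TEST_TRANSPORT} ]] && return 1                  # resolves to "tcp" in this run
    [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}
    [[ -z ${!ip} ]] && return 1                             # ${!ip} is 10.0.0.1 here
    echo "${!ip}"
}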
00:26:38.954 13:25:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:38.954 13:25:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:38.954 13:25:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:38.954 13:25:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:38.954 13:25:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:38.954 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:38.954 13:25:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.954 13:25:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.954 nvme0n1 00:26:38.954 13:25:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.954 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:38.954 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:38.954 13:25:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.954 13:25:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.954 13:25:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.954 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:38.954 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:38.954 13:25:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.954 13:25:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.954 13:25:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.954 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:38.954 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:26:38.954 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:38.954 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:38.954 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:38.954 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:38.954 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2Q2MTBhNDNhZDcwZGFkN2VkYjc3NWI0MTA4ODQxMmQ5YjczMTRhZTVhNDRlMzkwNTk0MGYyMGVhMzA4YThkMMkbgZ0=: 00:26:38.954 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:38.954 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:38.954 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:38.954 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2Q2MTBhNDNhZDcwZGFkN2VkYjc3NWI0MTA4ODQxMmQ5YjczMTRhZTVhNDRlMzkwNTk0MGYyMGVhMzA4YThkMMkbgZ0=: 00:26:38.954 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:38.954 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:26:38.954 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:38.954 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:38.954 
13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:38.954 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:38.954 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:38.954 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:38.954 13:25:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.954 13:25:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.954 13:25:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.954 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:38.954 13:25:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:38.954 13:25:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:38.954 13:25:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:38.954 13:25:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:38.954 13:25:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:38.954 13:25:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:38.954 13:25:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:38.954 13:25:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:38.954 13:25:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:38.954 13:25:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:38.954 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:38.954 13:25:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.954 13:25:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.213 nvme0n1 00:26:39.213 13:25:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.213 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:39.213 13:25:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.213 13:25:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.213 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:39.213 13:25:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.213 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:39.213 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:39.213 13:25:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.213 13:25:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.213 13:25:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.213 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:39.213 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:39.213 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:26:39.213 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:39.213 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:39.213 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:39.213 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:39.213 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjgwYmU5MzhhZWZhODJmZDcwYzQxMzdmMGQwZDliMjB0sihb: 00:26:39.213 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yjc2YzI2NTU4Yzc3NzBhOGFjOTMzZjEyZmFhMjI2NGE2YTAxMmVmZGNjMDEyZGUyY2UxMGU0ZTk3OWUzMjM3OGp1SzA=: 00:26:39.213 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:39.213 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:39.213 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjgwYmU5MzhhZWZhODJmZDcwYzQxMzdmMGQwZDliMjB0sihb: 00:26:39.213 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yjc2YzI2NTU4Yzc3NzBhOGFjOTMzZjEyZmFhMjI2NGE2YTAxMmVmZGNjMDEyZGUyY2UxMGU0ZTk3OWUzMjM3OGp1SzA=: ]] 00:26:39.213 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yjc2YzI2NTU4Yzc3NzBhOGFjOTMzZjEyZmFhMjI2NGE2YTAxMmVmZGNjMDEyZGUyY2UxMGU0ZTk3OWUzMjM3OGp1SzA=: 00:26:39.213 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:26:39.213 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:39.213 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:39.213 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:39.213 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:39.213 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:39.213 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:39.213 13:25:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.213 13:25:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.213 13:25:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.213 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:39.213 13:25:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:39.213 13:25:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:39.213 13:25:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:39.213 13:25:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:39.213 13:25:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:39.213 13:25:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:39.213 13:25:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:39.213 13:25:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:39.213 13:25:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:39.213 13:25:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:39.213 13:25:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:39.213 13:25:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.213 13:25:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.472 nvme0n1 00:26:39.472 13:25:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.472 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:39.472 13:25:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.472 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:39.472 13:25:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.472 13:25:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.472 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:39.472 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:39.472 13:25:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.472 13:25:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.472 13:25:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.472 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:39.472 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:26:39.472 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:39.472 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:39.472 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:39.472 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:39.472 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTVmZTMwNzQ0ZTJjODcxZjc1M2FjNjU4MTYzYmU2NGRjNjU0YjRjZDBmYzJiMDQyWrJITA==: 00:26:39.472 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmU3ODYxZmE1ZjNhMTlmYzU4NDE0MmNjMDQwYjU3ODRlZTc3OWFkYjg2NmRiZWY2HgH+kQ==: 00:26:39.472 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:39.472 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:39.472 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTVmZTMwNzQ0ZTJjODcxZjc1M2FjNjU4MTYzYmU2NGRjNjU0YjRjZDBmYzJiMDQyWrJITA==: 00:26:39.472 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmU3ODYxZmE1ZjNhMTlmYzU4NDE0MmNjMDQwYjU3ODRlZTc3OWFkYjg2NmRiZWY2HgH+kQ==: ]] 00:26:39.472 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmU3ODYxZmE1ZjNhMTlmYzU4NDE0MmNjMDQwYjU3ODRlZTc3OWFkYjg2NmRiZWY2HgH+kQ==: 00:26:39.472 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:26:39.472 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:39.472 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:39.472 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:39.472 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:39.472 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:39.472 13:25:36 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:39.472 13:25:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.472 13:25:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.472 13:25:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.472 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:39.472 13:25:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:39.472 13:25:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:39.472 13:25:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:39.472 13:25:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:39.472 13:25:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:39.472 13:25:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:39.472 13:25:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:39.472 13:25:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:39.472 13:25:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:39.472 13:25:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:39.472 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:39.472 13:25:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.472 13:25:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.735 nvme0n1 00:26:39.735 13:25:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.735 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:39.736 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:39.736 13:25:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.736 13:25:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.736 13:25:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.736 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:39.736 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:39.736 13:25:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.736 13:25:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.736 13:25:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.736 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:39.736 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:26:39.736 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:39.736 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:39.736 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:39.736 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
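The xtrace above repeats one host-side sequence for every key: restrict the allowed digest/dhgroup, attach with the DH-HMAC-CHAP key pair, confirm the controller came up, then detach. A minimal sketch of that connect_authenticate flow, reconstructed from the trace (the rpc_cmd helper, NQNs, address and flags are taken verbatim from the log; the function wrapper itself is only illustrative):

    # host side: one authenticated connect per (digest, dhgroup, keyid)
    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        # limit the initiator to the digest/dhgroup under test
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        # attach with the host key; the ctrlr key is passed only when ckey<keyid> exists
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}
        # authentication succeeded if the controller is visible, then tear it down again
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }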
00:26:39.736 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDBhNTZmNWZiYzk0YmFlYTIxMWIyN2VhMzM3Yjg3NThot4LD: 00:26:39.736 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjVlYTllZmEyNjQ5MDQ0NDBmNjdiMjY3NTk3YzdjMGEoJprA: 00:26:39.736 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:39.736 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:39.736 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDBhNTZmNWZiYzk0YmFlYTIxMWIyN2VhMzM3Yjg3NThot4LD: 00:26:39.736 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjVlYTllZmEyNjQ5MDQ0NDBmNjdiMjY3NTk3YzdjMGEoJprA: ]] 00:26:39.736 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjVlYTllZmEyNjQ5MDQ0NDBmNjdiMjY3NTk3YzdjMGEoJprA: 00:26:39.736 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:26:39.736 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:39.736 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:39.736 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:39.736 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:39.736 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:39.736 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:39.736 13:25:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.736 13:25:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.736 13:25:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.736 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:39.736 13:25:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:39.736 13:25:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:39.736 13:25:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:39.736 13:25:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:39.736 13:25:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:39.736 13:25:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:39.736 13:25:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:39.736 13:25:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:39.736 13:25:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:39.736 13:25:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:39.736 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:39.736 13:25:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.736 13:25:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.004 nvme0n1 00:26:40.004 13:25:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.004 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:26:40.004 13:25:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.004 13:25:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.004 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:40.004 13:25:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.004 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:40.004 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:40.004 13:25:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.004 13:25:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.004 13:25:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.004 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:40.004 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:26:40.004 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:40.004 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:40.004 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:40.004 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:40.004 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjE2MzFkYzhkYzVmYWRiYzgzNTdhNjIzODBiZmI2MGQ3MGQwMTIwYzZiMjlmYTRlic7jDA==: 00:26:40.004 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWY1NDgxNDhjOTY5NTc3ZmQ5NDA5MjQ1MmJmYzhiY2Y2wgYh: 00:26:40.004 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:40.004 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:40.004 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjE2MzFkYzhkYzVmYWRiYzgzNTdhNjIzODBiZmI2MGQ3MGQwMTIwYzZiMjlmYTRlic7jDA==: 00:26:40.004 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWY1NDgxNDhjOTY5NTc3ZmQ5NDA5MjQ1MmJmYzhiY2Y2wgYh: ]] 00:26:40.004 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWY1NDgxNDhjOTY5NTc3ZmQ5NDA5MjQ1MmJmYzhiY2Y2wgYh: 00:26:40.004 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:26:40.004 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:40.004 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:40.004 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:40.004 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:40.004 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:40.004 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:40.004 13:25:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.004 13:25:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.004 13:25:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.004 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:40.004 13:25:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:26:40.004 13:25:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:40.004 13:25:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:40.004 13:25:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:40.004 13:25:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:40.004 13:25:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:40.004 13:25:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:40.004 13:25:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:40.004 13:25:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:40.004 13:25:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:40.004 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:40.004 13:25:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.004 13:25:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.263 nvme0n1 00:26:40.263 13:25:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.263 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:40.263 13:25:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.263 13:25:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.263 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:40.263 13:25:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.263 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:40.263 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:40.263 13:25:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.263 13:25:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.263 13:25:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.263 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:40.263 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:26:40.263 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:40.263 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:40.263 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:40.263 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:40.263 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2Q2MTBhNDNhZDcwZGFkN2VkYjc3NWI0MTA4ODQxMmQ5YjczMTRhZTVhNDRlMzkwNTk0MGYyMGVhMzA4YThkMMkbgZ0=: 00:26:40.263 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:40.263 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:40.263 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:40.263 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:Y2Q2MTBhNDNhZDcwZGFkN2VkYjc3NWI0MTA4ODQxMmQ5YjczMTRhZTVhNDRlMzkwNTk0MGYyMGVhMzA4YThkMMkbgZ0=: 00:26:40.263 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:40.263 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:26:40.263 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:40.263 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:40.263 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:40.263 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:40.263 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:40.263 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:40.263 13:25:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.263 13:25:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.263 13:25:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.263 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:40.263 13:25:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:40.263 13:25:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:40.263 13:25:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:40.263 13:25:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:40.263 13:25:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:40.263 13:25:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:40.263 13:25:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:40.263 13:25:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:40.263 13:25:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:40.263 13:25:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:40.263 13:25:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:40.263 13:25:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.263 13:25:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.521 nvme0n1 00:26:40.521 13:25:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.521 13:25:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:40.521 13:25:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.521 13:25:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.521 13:25:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:40.521 13:25:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.521 13:25:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:40.521 13:25:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:40.521 13:25:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:26:40.521 13:25:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.521 13:25:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.521 13:25:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:40.521 13:25:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:40.521 13:25:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:26:40.521 13:25:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:40.521 13:25:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:40.521 13:25:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:40.521 13:25:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:40.521 13:25:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjgwYmU5MzhhZWZhODJmZDcwYzQxMzdmMGQwZDliMjB0sihb: 00:26:40.521 13:25:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yjc2YzI2NTU4Yzc3NzBhOGFjOTMzZjEyZmFhMjI2NGE2YTAxMmVmZGNjMDEyZGUyY2UxMGU0ZTk3OWUzMjM3OGp1SzA=: 00:26:40.521 13:25:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:40.521 13:25:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:40.521 13:25:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjgwYmU5MzhhZWZhODJmZDcwYzQxMzdmMGQwZDliMjB0sihb: 00:26:40.521 13:25:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yjc2YzI2NTU4Yzc3NzBhOGFjOTMzZjEyZmFhMjI2NGE2YTAxMmVmZGNjMDEyZGUyY2UxMGU0ZTk3OWUzMjM3OGp1SzA=: ]] 00:26:40.521 13:25:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yjc2YzI2NTU4Yzc3NzBhOGFjOTMzZjEyZmFhMjI2NGE2YTAxMmVmZGNjMDEyZGUyY2UxMGU0ZTk3OWUzMjM3OGp1SzA=: 00:26:40.521 13:25:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:26:40.521 13:25:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:40.521 13:25:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:40.521 13:25:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:40.521 13:25:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:40.521 13:25:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:40.521 13:25:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:40.521 13:25:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.521 13:25:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.521 13:25:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.521 13:25:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:40.521 13:25:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:40.521 13:25:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:40.521 13:25:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:40.521 13:25:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:40.521 13:25:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:40.521 13:25:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
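On the target side, nvmet_auth_set_key (host/auth.sh@42-51 in the trace) only shows the values being echoed; xtrace does not show where they are written. A sketch of that shape, assuming the destinations are nvmet configfs attributes, with the directory and attribute names left as placeholders rather than taken from the log:

    # target side: provision the expected key material for one keyid
    # NVMET_HOST_DIR and the attribute names below are placeholders (not visible in the trace)
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]}
        echo "hmac(${digest})" > "$NVMET_HOST_DIR/dhchap_hash"
        echo "$dhgroup"        > "$NVMET_HOST_DIR/dhchap_dhgroup"
        echo "$key"            > "$NVMET_HOST_DIR/dhchap_key"
        [[ -z $ckey ]] || echo "$ckey" > "$NVMET_HOST_DIR/dhchap_ctrl_key"   # bidirectional auth only
    }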
00:26:40.521 13:25:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:40.521 13:25:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:40.521 13:25:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:40.521 13:25:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:40.521 13:25:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:40.521 13:25:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.521 13:25:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.087 nvme0n1 00:26:41.088 13:25:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.088 13:25:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:41.088 13:25:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.088 13:25:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:41.088 13:25:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.088 13:25:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.088 13:25:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:41.088 13:25:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:41.088 13:25:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.088 13:25:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.088 13:25:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.088 13:25:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:41.088 13:25:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:26:41.088 13:25:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:41.088 13:25:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:41.088 13:25:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:41.088 13:25:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:41.088 13:25:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTVmZTMwNzQ0ZTJjODcxZjc1M2FjNjU4MTYzYmU2NGRjNjU0YjRjZDBmYzJiMDQyWrJITA==: 00:26:41.088 13:25:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmU3ODYxZmE1ZjNhMTlmYzU4NDE0MmNjMDQwYjU3ODRlZTc3OWFkYjg2NmRiZWY2HgH+kQ==: 00:26:41.088 13:25:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:41.088 13:25:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:41.088 13:25:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTVmZTMwNzQ0ZTJjODcxZjc1M2FjNjU4MTYzYmU2NGRjNjU0YjRjZDBmYzJiMDQyWrJITA==: 00:26:41.088 13:25:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmU3ODYxZmE1ZjNhMTlmYzU4NDE0MmNjMDQwYjU3ODRlZTc3OWFkYjg2NmRiZWY2HgH+kQ==: ]] 00:26:41.088 13:25:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmU3ODYxZmE1ZjNhMTlmYzU4NDE0MmNjMDQwYjU3ODRlZTc3OWFkYjg2NmRiZWY2HgH+kQ==: 00:26:41.088 13:25:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
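Every attach call above targets 10.0.0.1, which comes from get_main_ns_ip (nvmf/common.sh@741-755 in the trace): a transport-to-variable lookup followed by an indirect expansion. A minimal reconstruction of that logic, assuming TEST_TRANSPORT and NVMF_INITIATOR_IP are the environment variables the trace implies (tcp and 10.0.0.1 in this run):

    # resolve the address the initiator should connect to for the transport under test
    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}   # name of the variable that holds the address
        [[ -z ${!ip} ]] && return 1            # indirect expansion; 10.0.0.1 here
        echo "${!ip}"
    }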
00:26:41.088 13:25:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:41.088 13:25:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:41.088 13:25:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:41.088 13:25:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:41.088 13:25:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:41.088 13:25:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:41.088 13:25:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.088 13:25:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.088 13:25:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.088 13:25:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:41.088 13:25:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:41.088 13:25:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:41.088 13:25:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:41.088 13:25:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:41.088 13:25:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:41.088 13:25:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:41.088 13:25:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:41.088 13:25:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:41.088 13:25:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:41.088 13:25:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:41.088 13:25:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:41.088 13:25:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.088 13:25:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.354 nvme0n1 00:26:41.354 13:25:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.354 13:25:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:41.354 13:25:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.354 13:25:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:41.354 13:25:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.354 13:25:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.354 13:25:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:41.354 13:25:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:41.354 13:25:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.354 13:25:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.611 13:25:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.611 13:25:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:26:41.611 13:25:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:26:41.611 13:25:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:41.611 13:25:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:41.611 13:25:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:41.611 13:25:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:41.611 13:25:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDBhNTZmNWZiYzk0YmFlYTIxMWIyN2VhMzM3Yjg3NThot4LD: 00:26:41.611 13:25:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjVlYTllZmEyNjQ5MDQ0NDBmNjdiMjY3NTk3YzdjMGEoJprA: 00:26:41.611 13:25:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:41.611 13:25:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:41.611 13:25:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDBhNTZmNWZiYzk0YmFlYTIxMWIyN2VhMzM3Yjg3NThot4LD: 00:26:41.611 13:25:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjVlYTllZmEyNjQ5MDQ0NDBmNjdiMjY3NTk3YzdjMGEoJprA: ]] 00:26:41.611 13:25:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjVlYTllZmEyNjQ5MDQ0NDBmNjdiMjY3NTk3YzdjMGEoJprA: 00:26:41.611 13:25:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:26:41.611 13:25:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:41.611 13:25:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:41.611 13:25:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:41.611 13:25:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:41.611 13:25:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:41.611 13:25:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:41.611 13:25:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.611 13:25:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.611 13:25:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.611 13:25:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:41.611 13:25:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:41.611 13:25:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:41.611 13:25:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:41.611 13:25:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:41.611 13:25:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:41.611 13:25:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:41.611 13:25:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:41.611 13:25:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:41.611 13:25:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:41.611 13:25:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:41.611 13:25:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:41.611 13:25:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.611 13:25:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.869 nvme0n1 00:26:41.869 13:25:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.869 13:25:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:41.869 13:25:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.869 13:25:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.869 13:25:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:41.869 13:25:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.869 13:25:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:41.869 13:25:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:41.869 13:25:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.869 13:25:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.869 13:25:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.869 13:25:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:41.869 13:25:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:26:41.869 13:25:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:41.869 13:25:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:41.869 13:25:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:41.869 13:25:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:41.869 13:25:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjE2MzFkYzhkYzVmYWRiYzgzNTdhNjIzODBiZmI2MGQ3MGQwMTIwYzZiMjlmYTRlic7jDA==: 00:26:41.869 13:25:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWY1NDgxNDhjOTY5NTc3ZmQ5NDA5MjQ1MmJmYzhiY2Y2wgYh: 00:26:41.869 13:25:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:41.869 13:25:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:41.869 13:25:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjE2MzFkYzhkYzVmYWRiYzgzNTdhNjIzODBiZmI2MGQ3MGQwMTIwYzZiMjlmYTRlic7jDA==: 00:26:41.869 13:25:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWY1NDgxNDhjOTY5NTc3ZmQ5NDA5MjQ1MmJmYzhiY2Y2wgYh: ]] 00:26:41.869 13:25:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWY1NDgxNDhjOTY5NTc3ZmQ5NDA5MjQ1MmJmYzhiY2Y2wgYh: 00:26:41.869 13:25:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:26:41.869 13:25:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:41.869 13:25:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:41.869 13:25:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:41.869 13:25:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:41.869 13:25:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:41.869 13:25:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:41.869 13:25:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.869 13:25:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.869 13:25:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.869 13:25:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:41.869 13:25:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:41.869 13:25:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:41.869 13:25:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:41.869 13:25:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:41.869 13:25:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:41.869 13:25:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:41.869 13:25:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:41.869 13:25:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:41.869 13:25:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:41.869 13:25:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:41.869 13:25:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:41.869 13:25:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.869 13:25:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.435 nvme0n1 00:26:42.435 13:25:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.435 13:25:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:42.435 13:25:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:42.435 13:25:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.435 13:25:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.435 13:25:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.435 13:25:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:42.435 13:25:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:42.435 13:25:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.435 13:25:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.435 13:25:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.435 13:25:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:42.435 13:25:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:26:42.435 13:25:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:42.435 13:25:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:42.435 13:25:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:42.435 13:25:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:42.435 13:25:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:Y2Q2MTBhNDNhZDcwZGFkN2VkYjc3NWI0MTA4ODQxMmQ5YjczMTRhZTVhNDRlMzkwNTk0MGYyMGVhMzA4YThkMMkbgZ0=: 00:26:42.435 13:25:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:42.435 13:25:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:42.435 13:25:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:42.435 13:25:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2Q2MTBhNDNhZDcwZGFkN2VkYjc3NWI0MTA4ODQxMmQ5YjczMTRhZTVhNDRlMzkwNTk0MGYyMGVhMzA4YThkMMkbgZ0=: 00:26:42.435 13:25:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:42.435 13:25:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:26:42.435 13:25:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:42.435 13:25:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:42.435 13:25:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:42.435 13:25:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:42.435 13:25:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:42.435 13:25:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:42.435 13:25:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.435 13:25:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.435 13:25:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.435 13:25:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:42.435 13:25:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:42.435 13:25:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:42.435 13:25:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:42.435 13:25:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:42.435 13:25:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:42.435 13:25:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:42.435 13:25:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:42.435 13:25:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:42.435 13:25:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:42.435 13:25:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:42.435 13:25:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:42.435 13:25:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.435 13:25:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.693 nvme0n1 00:26:42.693 13:25:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.693 13:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:42.693 13:25:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.693 13:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:42.693 13:25:39 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.693 13:25:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.693 13:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:42.693 13:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:42.693 13:25:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.693 13:25:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.693 13:25:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.693 13:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:42.693 13:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:42.693 13:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:26:42.693 13:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:42.693 13:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:42.693 13:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:42.693 13:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:42.693 13:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjgwYmU5MzhhZWZhODJmZDcwYzQxMzdmMGQwZDliMjB0sihb: 00:26:42.693 13:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yjc2YzI2NTU4Yzc3NzBhOGFjOTMzZjEyZmFhMjI2NGE2YTAxMmVmZGNjMDEyZGUyY2UxMGU0ZTk3OWUzMjM3OGp1SzA=: 00:26:42.693 13:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:42.693 13:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:42.693 13:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjgwYmU5MzhhZWZhODJmZDcwYzQxMzdmMGQwZDliMjB0sihb: 00:26:42.693 13:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yjc2YzI2NTU4Yzc3NzBhOGFjOTMzZjEyZmFhMjI2NGE2YTAxMmVmZGNjMDEyZGUyY2UxMGU0ZTk3OWUzMjM3OGp1SzA=: ]] 00:26:42.693 13:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yjc2YzI2NTU4Yzc3NzBhOGFjOTMzZjEyZmFhMjI2NGE2YTAxMmVmZGNjMDEyZGUyY2UxMGU0ZTk3OWUzMjM3OGp1SzA=: 00:26:42.693 13:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:26:42.693 13:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:42.693 13:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:42.693 13:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:42.693 13:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:42.693 13:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:42.693 13:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:42.693 13:25:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.693 13:25:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.693 13:25:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.693 13:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:42.693 13:25:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:42.693 13:25:39 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:26:42.693 13:25:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:42.693 13:25:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:42.693 13:25:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:42.693 13:25:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:42.693 13:25:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:42.693 13:25:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:42.693 13:25:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:42.693 13:25:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:42.693 13:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:42.693 13:25:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.693 13:25:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.626 nvme0n1 00:26:43.626 13:25:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.626 13:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:43.626 13:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:43.626 13:25:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.626 13:25:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.626 13:25:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.626 13:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:43.626 13:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:43.626 13:25:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.626 13:25:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.626 13:25:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.626 13:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:43.626 13:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:26:43.626 13:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:43.626 13:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:43.626 13:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:43.626 13:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:43.626 13:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTVmZTMwNzQ0ZTJjODcxZjc1M2FjNjU4MTYzYmU2NGRjNjU0YjRjZDBmYzJiMDQyWrJITA==: 00:26:43.626 13:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmU3ODYxZmE1ZjNhMTlmYzU4NDE0MmNjMDQwYjU3ODRlZTc3OWFkYjg2NmRiZWY2HgH+kQ==: 00:26:43.626 13:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:43.626 13:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:43.626 13:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YTVmZTMwNzQ0ZTJjODcxZjc1M2FjNjU4MTYzYmU2NGRjNjU0YjRjZDBmYzJiMDQyWrJITA==: 00:26:43.626 13:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmU3ODYxZmE1ZjNhMTlmYzU4NDE0MmNjMDQwYjU3ODRlZTc3OWFkYjg2NmRiZWY2HgH+kQ==: ]] 00:26:43.626 13:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmU3ODYxZmE1ZjNhMTlmYzU4NDE0MmNjMDQwYjU3ODRlZTc3OWFkYjg2NmRiZWY2HgH+kQ==: 00:26:43.626 13:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:26:43.626 13:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:43.626 13:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:43.626 13:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:43.626 13:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:43.626 13:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:43.626 13:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:43.626 13:25:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.626 13:25:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.626 13:25:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.626 13:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:43.626 13:25:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:43.626 13:25:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:43.626 13:25:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:43.626 13:25:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:43.626 13:25:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:43.626 13:25:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:43.626 13:25:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:43.626 13:25:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:43.626 13:25:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:43.626 13:25:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:43.626 13:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:43.626 13:25:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.626 13:25:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.192 nvme0n1 00:26:44.192 13:25:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.192 13:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:44.192 13:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:44.192 13:25:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.192 13:25:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.192 13:25:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.192 13:25:40 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:44.192 13:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:44.192 13:25:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.192 13:25:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.192 13:25:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.192 13:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:44.192 13:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:26:44.192 13:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:44.192 13:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:44.192 13:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:44.192 13:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:44.192 13:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDBhNTZmNWZiYzk0YmFlYTIxMWIyN2VhMzM3Yjg3NThot4LD: 00:26:44.192 13:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjVlYTllZmEyNjQ5MDQ0NDBmNjdiMjY3NTk3YzdjMGEoJprA: 00:26:44.192 13:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:44.192 13:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:44.192 13:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDBhNTZmNWZiYzk0YmFlYTIxMWIyN2VhMzM3Yjg3NThot4LD: 00:26:44.192 13:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjVlYTllZmEyNjQ5MDQ0NDBmNjdiMjY3NTk3YzdjMGEoJprA: ]] 00:26:44.192 13:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjVlYTllZmEyNjQ5MDQ0NDBmNjdiMjY3NTk3YzdjMGEoJprA: 00:26:44.192 13:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:26:44.192 13:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:44.192 13:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:44.192 13:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:44.192 13:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:44.192 13:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:44.192 13:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:44.192 13:25:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.192 13:25:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.192 13:25:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.192 13:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:44.192 13:25:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:44.192 13:25:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:44.192 13:25:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:44.192 13:25:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:44.192 13:25:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:44.192 13:25:40 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:44.192 13:25:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:44.192 13:25:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:44.192 13:25:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:44.192 13:25:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:44.192 13:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:44.192 13:25:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.192 13:25:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.757 nvme0n1 00:26:44.757 13:25:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.757 13:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:44.757 13:25:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.757 13:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:44.757 13:25:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.757 13:25:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.757 13:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:44.757 13:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:44.757 13:25:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.757 13:25:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.757 13:25:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.757 13:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:44.757 13:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:26:44.757 13:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:44.757 13:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:44.757 13:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:44.757 13:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:44.757 13:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjE2MzFkYzhkYzVmYWRiYzgzNTdhNjIzODBiZmI2MGQ3MGQwMTIwYzZiMjlmYTRlic7jDA==: 00:26:44.757 13:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWY1NDgxNDhjOTY5NTc3ZmQ5NDA5MjQ1MmJmYzhiY2Y2wgYh: 00:26:44.757 13:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:44.757 13:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:44.757 13:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjE2MzFkYzhkYzVmYWRiYzgzNTdhNjIzODBiZmI2MGQ3MGQwMTIwYzZiMjlmYTRlic7jDA==: 00:26:44.757 13:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWY1NDgxNDhjOTY5NTc3ZmQ5NDA5MjQ1MmJmYzhiY2Y2wgYh: ]] 00:26:44.757 13:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWY1NDgxNDhjOTY5NTc3ZmQ5NDA5MjQ1MmJmYzhiY2Y2wgYh: 00:26:44.757 13:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:26:44.757 13:25:41 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:44.757 13:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:44.757 13:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:44.757 13:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:44.757 13:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:44.757 13:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:44.757 13:25:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.757 13:25:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.015 13:25:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.015 13:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:45.015 13:25:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:45.015 13:25:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:45.015 13:25:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:45.015 13:25:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:45.015 13:25:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:45.015 13:25:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:45.015 13:25:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:45.015 13:25:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:45.015 13:25:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:45.015 13:25:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:45.015 13:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:45.015 13:25:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.015 13:25:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.581 nvme0n1 00:26:45.581 13:25:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.581 13:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:45.581 13:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:45.581 13:25:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.581 13:25:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.581 13:25:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.581 13:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:45.581 13:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:45.581 13:25:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.581 13:25:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.581 13:25:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.581 13:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:26:45.581 13:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:26:45.581 13:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:45.581 13:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:45.581 13:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:45.581 13:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:45.581 13:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2Q2MTBhNDNhZDcwZGFkN2VkYjc3NWI0MTA4ODQxMmQ5YjczMTRhZTVhNDRlMzkwNTk0MGYyMGVhMzA4YThkMMkbgZ0=: 00:26:45.581 13:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:45.581 13:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:45.581 13:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:45.581 13:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2Q2MTBhNDNhZDcwZGFkN2VkYjc3NWI0MTA4ODQxMmQ5YjczMTRhZTVhNDRlMzkwNTk0MGYyMGVhMzA4YThkMMkbgZ0=: 00:26:45.581 13:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:45.581 13:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:26:45.581 13:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:45.581 13:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:45.581 13:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:45.581 13:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:45.581 13:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:45.581 13:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:45.581 13:25:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.581 13:25:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.581 13:25:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.581 13:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:45.581 13:25:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:45.581 13:25:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:45.581 13:25:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:45.581 13:25:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:45.581 13:25:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:45.581 13:25:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:45.581 13:25:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:45.581 13:25:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:45.581 13:25:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:45.581 13:25:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:45.581 13:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:45.581 13:25:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:26:45.581 13:25:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.147 nvme0n1 00:26:46.147 13:25:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:46.147 13:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:46.147 13:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:46.147 13:25:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.147 13:25:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.147 13:25:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:46.147 13:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:46.147 13:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:46.147 13:25:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.147 13:25:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.405 13:25:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:46.405 13:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:46.405 13:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:46.405 13:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:46.405 13:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:46.405 13:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:46.405 13:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTVmZTMwNzQ0ZTJjODcxZjc1M2FjNjU4MTYzYmU2NGRjNjU0YjRjZDBmYzJiMDQyWrJITA==: 00:26:46.405 13:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmU3ODYxZmE1ZjNhMTlmYzU4NDE0MmNjMDQwYjU3ODRlZTc3OWFkYjg2NmRiZWY2HgH+kQ==: 00:26:46.405 13:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:46.405 13:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:46.405 13:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTVmZTMwNzQ0ZTJjODcxZjc1M2FjNjU4MTYzYmU2NGRjNjU0YjRjZDBmYzJiMDQyWrJITA==: 00:26:46.405 13:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmU3ODYxZmE1ZjNhMTlmYzU4NDE0MmNjMDQwYjU3ODRlZTc3OWFkYjg2NmRiZWY2HgH+kQ==: ]] 00:26:46.405 13:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmU3ODYxZmE1ZjNhMTlmYzU4NDE0MmNjMDQwYjU3ODRlZTc3OWFkYjg2NmRiZWY2HgH+kQ==: 00:26:46.405 13:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:46.405 13:25:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.405 13:25:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.405 13:25:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:46.405 13:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:26:46.405 13:25:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:46.405 13:25:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:46.405 13:25:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:46.405 13:25:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:46.405 
13:25:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:46.405 13:25:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:46.405 13:25:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:46.405 13:25:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:46.405 13:25:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:46.405 13:25:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:46.405 13:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:46.405 13:25:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:26:46.405 13:25:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:46.405 13:25:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:26:46.405 13:25:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:46.405 13:25:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:26:46.405 13:25:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:46.405 13:25:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:46.405 13:25:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.405 13:25:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.405 2024/07/15 13:25:42 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:26:46.405 request: 00:26:46.405 { 00:26:46.405 "method": "bdev_nvme_attach_controller", 00:26:46.405 "params": { 00:26:46.405 "name": "nvme0", 00:26:46.405 "trtype": "tcp", 00:26:46.405 "traddr": "10.0.0.1", 00:26:46.405 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:46.405 "adrfam": "ipv4", 00:26:46.405 "trsvcid": "4420", 00:26:46.405 "subnqn": "nqn.2024-02.io.spdk:cnode0" 00:26:46.405 } 00:26:46.405 } 00:26:46.405 Got JSON-RPC error response 00:26:46.405 GoRPCClient: error on JSON-RPC call 00:26:46.405 13:25:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:46.405 13:25:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:26:46.405 13:25:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:46.405 13:25:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:46.405 13:25:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:46.405 13:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:26:46.405 13:25:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.405 13:25:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
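The keyid loop traced above is the positive half of the test: for each key the kernel target is switched to the matching secret, the SPDK host is restricted to one digest/DH-group pair, and a controller is attached with that key and its controller (bidirectional) counterpart. A condensed sketch of one such pass, built only from the RPCs visible in the trace (key names such as key2/ckey2 refer to keys registered earlier in the test, outside this excerpt):

# Sketch of a single connect_authenticate iteration (sha512 / ffdhe8192, keyid 2).
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$RPC bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
$RPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2
[[ $($RPC bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]   # attach must have produced nvme0
$RPC bdev_nvme_detach_controller nvme0

The traces around this point then flip to the negative half: the target is re-keyed with sha256/ffdhe2048 for keyid 1, and attach attempts that omit the key, use the wrong key slot, or pair key1 with a mismatched controller key are all expected to fail with the JSON-RPC Input/output errors recorded here.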
00:26:46.405 13:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:26:46.405 13:25:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:46.405 13:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:26:46.405 13:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:26:46.405 13:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:46.405 13:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:46.405 13:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:46.405 13:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:46.405 13:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:46.405 13:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:46.405 13:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:46.405 13:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:46.405 13:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:46.405 13:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:46.405 13:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:46.405 13:25:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:26:46.406 13:25:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:46.406 13:25:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:26:46.406 13:25:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:46.406 13:25:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:26:46.406 13:25:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:46.406 13:25:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:46.406 13:25:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.406 13:25:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.406 2024/07/15 13:25:43 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 dhchap_key:key2 hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:26:46.406 request: 00:26:46.406 { 00:26:46.406 "method": "bdev_nvme_attach_controller", 00:26:46.406 "params": { 00:26:46.406 "name": "nvme0", 00:26:46.406 "trtype": "tcp", 00:26:46.406 "traddr": "10.0.0.1", 00:26:46.406 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:46.406 "adrfam": "ipv4", 00:26:46.406 "trsvcid": "4420", 00:26:46.406 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:46.406 "dhchap_key": "key2" 00:26:46.406 } 00:26:46.406 } 00:26:46.406 Got 
JSON-RPC error response 00:26:46.406 GoRPCClient: error on JSON-RPC call 00:26:46.406 13:25:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:46.406 13:25:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:26:46.406 13:25:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:46.406 13:25:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:46.406 13:25:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:46.406 13:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:26:46.406 13:25:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.406 13:25:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.406 13:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:26:46.406 13:25:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:46.406 13:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:26:46.406 13:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:26:46.406 13:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:46.406 13:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:46.406 13:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:46.406 13:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:46.406 13:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:46.406 13:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:46.406 13:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:46.406 13:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:46.406 13:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:46.406 13:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:46.406 13:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:46.406 13:25:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:26:46.406 13:25:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:46.406 13:25:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:26:46.406 13:25:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:46.406 13:25:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:26:46.406 13:25:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:46.406 13:25:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:46.406 13:25:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 
-- # xtrace_disable 00:26:46.406 13:25:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.406 2024/07/15 13:25:43 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 dhchap_ctrlr_key:ckey2 dhchap_key:key1 hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:26:46.406 request: 00:26:46.406 { 00:26:46.406 "method": "bdev_nvme_attach_controller", 00:26:46.406 "params": { 00:26:46.406 "name": "nvme0", 00:26:46.406 "trtype": "tcp", 00:26:46.406 "traddr": "10.0.0.1", 00:26:46.406 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:46.406 "adrfam": "ipv4", 00:26:46.406 "trsvcid": "4420", 00:26:46.406 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:46.406 "dhchap_key": "key1", 00:26:46.406 "dhchap_ctrlr_key": "ckey2" 00:26:46.406 } 00:26:46.406 } 00:26:46.406 Got JSON-RPC error response 00:26:46.406 GoRPCClient: error on JSON-RPC call 00:26:46.406 13:25:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:46.406 13:25:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:26:46.406 13:25:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:46.406 13:25:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:46.406 13:25:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:46.406 13:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:26:46.668 13:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:26:46.668 13:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:26:46.668 13:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:46.668 13:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:26:46.668 13:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:46.668 13:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:26:46.668 13:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:46.668 13:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:46.668 rmmod nvme_tcp 00:26:46.668 rmmod nvme_fabrics 00:26:46.668 13:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:46.668 13:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:26:46.668 13:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:26:46.668 13:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 109972 ']' 00:26:46.668 13:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 109972 00:26:46.668 13:25:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@946 -- # '[' -z 109972 ']' 00:26:46.668 13:25:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@950 -- # kill -0 109972 00:26:46.668 13:25:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@951 -- # uname 00:26:46.668 13:25:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:46.668 13:25:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 109972 00:26:46.668 killing process with pid 109972 00:26:46.668 13:25:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:26:46.668 13:25:43 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:26:46.668 13:25:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 109972' 00:26:46.668 13:25:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@965 -- # kill 109972 00:26:46.668 13:25:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@970 -- # wait 109972 00:26:46.963 13:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:46.963 13:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:46.963 13:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:46.963 13:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:46.963 13:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:46.963 13:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:46.963 13:25:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:46.963 13:25:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:46.963 13:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:26:46.963 13:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:46.963 13:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:46.963 13:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:26:46.963 13:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:26:46.963 13:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:26:46.963 13:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:46.963 13:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:46.963 13:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:46.963 13:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:46.963 13:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:26:46.963 13:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:26:46.963 13:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:26:47.527 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:47.784 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:26:47.784 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:26:47.784 13:25:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.P60 /tmp/spdk.key-null.F7s /tmp/spdk.key-sha256.zrB /tmp/spdk.key-sha384.9kQ /tmp/spdk.key-sha512.inT /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:26:47.784 13:25:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:26:48.047 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 
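With the authentication cases done, the harness kills the nvmf target, unloads the host-side nvme-tcp/nvme-fabrics modules, and dismantles the kernel nvmet subsystem it had been authenticating against; the configfs paths echoed in the trace show the teardown order (host link first, then port binding, namespace, port, subsystem, and finally the nvmet modules). As a rough sketch, the same teardown by hand would be:

# Sketch of the kernel nvmet teardown shown above; every path is taken from the trace.
# (The trace also disables the namespace with an 'echo 0' before removing it; the
#  target attribute of that redirect is not shown there, so it is omitted here.)
rm    /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
rmdir /sys/kernel/config/nvmet/ports/1
rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
modprobe -r nvmet_tcp nvmet     # only safe once nothing else holds the nvmet module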
00:26:48.047 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:26:48.047 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:26:48.306 00:26:48.306 real 0m35.351s 00:26:48.306 user 0m31.771s 00:26:48.306 sys 0m3.788s 00:26:48.306 13:25:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:48.306 ************************************ 00:26:48.306 END TEST nvmf_auth_host 00:26:48.306 13:25:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.306 ************************************ 00:26:48.306 13:25:44 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:26:48.306 13:25:44 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:48.306 13:25:44 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:26:48.306 13:25:44 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:48.306 13:25:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:48.306 ************************************ 00:26:48.306 START TEST nvmf_digest 00:26:48.306 ************************************ 00:26:48.306 13:25:44 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:48.306 * Looking for test storage... 00:26:48.306 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:48.306 13:25:44 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:48.306 13:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:26:48.306 13:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:48.306 13:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:48.306 13:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:48.306 13:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:48.306 13:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:48.306 13:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:48.306 13:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:48.306 13:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:48.306 13:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:48.306 13:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:48.306 13:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:26:48.306 13:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=c8b8b44b-387e-43b9-a950-dc0d98528a02 00:26:48.306 13:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:48.306 13:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:48.306 13:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:48.306 13:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:48.306 13:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:48.306 13:25:44 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:48.306 13:25:44 nvmf_tcp.nvmf_digest -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:48.306 13:25:44 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:48.306 13:25:44 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.306 13:25:44 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.306 13:25:44 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.306 13:25:44 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:26:48.306 13:25:44 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.306 13:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:26:48.306 13:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:48.306 13:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:48.306 13:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:48.306 13:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:48.306 13:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:48.306 13:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:48.306 13:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:48.306 13:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:48.306 13:25:44 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # 
nqn=nqn.2016-06.io.spdk:cnode1 00:26:48.306 13:25:44 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:26:48.307 13:25:44 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:26:48.307 13:25:44 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:26:48.307 13:25:44 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:26:48.307 13:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:48.307 13:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:48.307 13:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:48.307 13:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:48.307 13:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:48.307 13:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:48.307 13:25:44 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:48.307 13:25:44 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:48.307 13:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:26:48.307 13:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:26:48.307 13:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:26:48.307 13:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:26:48.307 13:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:26:48.307 13:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@432 -- # nvmf_veth_init 00:26:48.307 13:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:48.307 13:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:48.307 13:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:48.307 13:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:26:48.307 13:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:48.307 13:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:48.307 13:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:48.307 13:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:48.307 13:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:48.307 13:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:48.307 13:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:48.307 13:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:48.307 13:25:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:26:48.307 13:25:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:26:48.307 Cannot find device "nvmf_tgt_br" 00:26:48.307 13:25:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # true 00:26:48.307 13:25:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:26:48.307 Cannot find device "nvmf_tgt_br2" 00:26:48.307 13:25:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # true 00:26:48.307 13:25:45 nvmf_tcp.nvmf_digest -- 
nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:26:48.307 13:25:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:26:48.307 Cannot find device "nvmf_tgt_br" 00:26:48.307 13:25:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # true 00:26:48.307 13:25:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:26:48.565 Cannot find device "nvmf_tgt_br2" 00:26:48.565 13:25:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # true 00:26:48.565 13:25:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:26:48.565 13:25:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:26:48.565 13:25:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:48.565 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:48.565 13:25:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # true 00:26:48.565 13:25:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:48.565 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:48.565 13:25:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # true 00:26:48.565 13:25:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:26:48.565 13:25:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:48.565 13:25:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:48.565 13:25:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:48.565 13:25:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:48.565 13:25:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:48.565 13:25:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:48.565 13:25:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:48.565 13:25:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:48.565 13:25:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:26:48.565 13:25:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:26:48.565 13:25:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:26:48.565 13:25:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:26:48.565 13:25:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:48.565 13:25:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:48.565 13:25:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:48.565 13:25:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:26:48.565 13:25:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:26:48.565 13:25:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:26:48.565 13:25:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@197 -- # ip link set 
nvmf_tgt_br master nvmf_br 00:26:48.565 13:25:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:48.565 13:25:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:48.565 13:25:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:48.823 13:25:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:26:48.823 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:48.823 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.107 ms 00:26:48.823 00:26:48.823 --- 10.0.0.2 ping statistics --- 00:26:48.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:48.823 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:26:48.823 13:25:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:26:48.823 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:48.823 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:26:48.823 00:26:48.823 --- 10.0.0.3 ping statistics --- 00:26:48.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:48.823 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:26:48.823 13:25:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:48.823 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:48.823 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:26:48.823 00:26:48.823 --- 10.0.0.1 ping statistics --- 00:26:48.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:48.823 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:26:48.823 13:25:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:48.823 13:25:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@433 -- # return 0 00:26:48.823 13:25:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:48.823 13:25:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:48.823 13:25:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:48.823 13:25:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:48.823 13:25:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:48.823 13:25:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:48.823 13:25:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:48.823 13:25:45 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:48.823 13:25:45 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:26:48.823 13:25:45 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:26:48.823 13:25:45 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:26:48.823 13:25:45 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:48.823 13:25:45 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:48.823 ************************************ 00:26:48.823 START TEST nvmf_digest_clean 00:26:48.823 ************************************ 00:26:48.823 13:25:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1121 -- # run_digest 00:26:48.823 13:25:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:26:48.823 13:25:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == 
\d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:26:48.823 13:25:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:26:48.823 13:25:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:26:48.823 13:25:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:26:48.823 13:25:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:48.823 13:25:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@720 -- # xtrace_disable 00:26:48.823 13:25:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:48.823 13:25:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=111551 00:26:48.823 13:25:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:48.823 13:25:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 111551 00:26:48.823 13:25:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 111551 ']' 00:26:48.823 13:25:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:48.823 13:25:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:48.823 13:25:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:48.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:48.823 13:25:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:48.823 13:25:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:48.823 [2024-07-15 13:25:45.414481] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:26:48.823 [2024-07-15 13:25:45.414582] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:48.823 [2024-07-15 13:25:45.555961] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:49.079 [2024-07-15 13:25:45.678019] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:49.079 [2024-07-15 13:25:45.678105] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:49.079 [2024-07-15 13:25:45.678118] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:49.079 [2024-07-15 13:25:45.678128] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:49.079 [2024-07-15 13:25:45.678135] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
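For the digest tests the target runs inside the nvmf_tgt_ns_spdk network namespace, reached from the initiator over veth pairs joined by a bridge; the ip/iptables commands above build that topology and the three pings confirm the initiator and target addresses can reach each other before nvmf_tgt is started inside the namespace. A trimmed sketch of the same layout, using the interface names and addresses from the trace (the second target interface, nvmf_tgt_if2/10.0.0.3, is omitted for brevity):

# Sketch of the veth/bridge topology built by nvmf_veth_init above (one target interface shown).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target side
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                            # initiator -> target, as in the trace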
00:26:49.079 [2024-07-15 13:25:45.678170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:50.014 13:25:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:50.014 13:25:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:26:50.014 13:25:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:50.014 13:25:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:50.014 13:25:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:50.014 13:25:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:50.014 13:25:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:26:50.014 13:25:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:26:50.014 13:25:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:26:50.014 13:25:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.014 13:25:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:50.014 null0 00:26:50.014 [2024-07-15 13:25:46.555763] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:50.014 [2024-07-15 13:25:46.579903] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:50.014 13:25:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.014 13:25:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:26:50.014 13:25:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:50.014 13:25:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:50.014 13:25:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:50.014 13:25:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:50.014 13:25:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:50.014 13:25:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:50.014 13:25:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=111607 00:26:50.014 13:25:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:50.014 13:25:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 111607 /var/tmp/bperf.sock 00:26:50.014 13:25:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 111607 ']' 00:26:50.014 13:25:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:50.014 13:25:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:50.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:26:50.014 13:25:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:50.014 13:25:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:50.014 13:25:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:50.014 [2024-07-15 13:25:46.644133] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:26:50.014 [2024-07-15 13:25:46.644276] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111607 ] 00:26:50.272 [2024-07-15 13:25:46.785777] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:50.273 [2024-07-15 13:25:46.892708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:51.207 13:25:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:51.207 13:25:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:26:51.207 13:25:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:51.207 13:25:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:51.207 13:25:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:51.469 13:25:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:51.469 13:25:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:51.726 nvme0n1 00:26:51.726 13:25:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:51.726 13:25:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:51.726 Running I/O for 2 seconds... 
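Each bperf run traced here follows the same control sequence over /var/tmp/bperf.sock: finish framework initialization (bdevperf was started with --wait-for-rpc), attach an NVMe-oF controller to the in-namespace target with the NVMe/TCP data digest enabled, then drive the preconfigured workload through the bdevperf helper script. A minimal sketch of that sequence, with the paths exactly as they appear in the trace:

# Sketch of the per-run bperf control flow (first run: randread, 4 KiB I/O, queue depth 128).
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
BPERF_PY=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
SOCK=/var/tmp/bperf.sock

$RPC -s $SOCK framework_start_init
$RPC -s $SOCK bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0      # --ddgst enables the NVMe/TCP data digest
$BPERF_PY -s $SOCK perform_tests                        # runs the workload given on the bdevperf command line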
00:26:54.254 00:26:54.254 Latency(us) 00:26:54.254 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:54.254 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:54.254 nvme0n1 : 2.00 18649.03 72.85 0.00 0.00 6855.11 3753.43 16681.89 00:26:54.254 =================================================================================================================== 00:26:54.254 Total : 18649.03 72.85 0.00 0.00 6855.11 3753.43 16681.89 00:26:54.254 0 00:26:54.254 13:25:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:54.254 13:25:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:54.254 13:25:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:54.254 13:25:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:54.254 | select(.opcode=="crc32c") 00:26:54.254 | "\(.module_name) \(.executed)"' 00:26:54.254 13:25:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:54.254 13:25:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:54.254 13:25:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:54.254 13:25:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:54.254 13:25:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:54.254 13:25:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 111607 00:26:54.254 13:25:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 111607 ']' 00:26:54.254 13:25:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 111607 00:26:54.254 13:25:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:26:54.254 13:25:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:54.254 13:25:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 111607 00:26:54.254 13:25:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:26:54.254 killing process with pid 111607 00:26:54.254 13:25:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:26:54.254 13:25:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 111607' 00:26:54.254 13:25:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 111607 00:26:54.254 Received shutdown signal, test time was about 2.000000 seconds 00:26:54.254 00:26:54.254 Latency(us) 00:26:54.254 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:54.254 =================================================================================================================== 00:26:54.254 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:54.254 13:25:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 111607 00:26:54.512 13:25:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:26:54.512 13:25:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:54.512 13:25:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:54.512 13:25:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:54.512 13:25:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:54.512 13:25:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:54.512 13:25:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:54.512 13:25:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=111692 00:26:54.512 13:25:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 111692 /var/tmp/bperf.sock 00:26:54.512 13:25:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 111692 ']' 00:26:54.512 13:25:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:54.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:54.512 13:25:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:54.512 13:25:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:54.512 13:25:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:54.512 13:25:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:54.512 13:25:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:54.512 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:54.512 Zero copy mechanism will not be used. 00:26:54.512 [2024-07-15 13:25:51.049298] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:26:54.512 [2024-07-15 13:25:51.049406] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111692 ] 00:26:54.512 [2024-07-15 13:25:51.185507] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:54.770 [2024-07-15 13:25:51.294331] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:55.704 13:25:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:55.704 13:25:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:26:55.704 13:25:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:55.704 13:25:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:55.704 13:25:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:55.704 13:25:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:55.704 13:25:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:56.270 nvme0n1 00:26:56.270 13:25:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:56.270 13:25:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:56.270 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:56.270 Zero copy mechanism will not be used. 00:26:56.270 Running I/O for 2 seconds... 
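Each clean pass ends with the crc32c accounting check whose xtrace appears in the results below. A small sketch of that check, built only from the accel_get_stats RPC and the jq filter shown in this log, is:

    # read "<module> <executed>" for the crc32c opcode from bdevperf's accel statistics
    read -r acc_module acc_executed < <(
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
    # the pass succeeds only if digests were actually computed, and by the expected module (software here)
    (( acc_executed > 0 )) && [[ $acc_module == software ]]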
00:26:58.168 00:26:58.168 Latency(us) 00:26:58.168 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:58.168 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:58.168 nvme0n1 : 2.00 6297.27 787.16 0.00 0.00 2536.68 662.81 7089.80 00:26:58.168 =================================================================================================================== 00:26:58.168 Total : 6297.27 787.16 0.00 0.00 2536.68 662.81 7089.80 00:26:58.168 0 00:26:58.168 13:25:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:58.168 13:25:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:58.168 13:25:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:58.168 13:25:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:58.168 13:25:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:58.168 | select(.opcode=="crc32c") 00:26:58.168 | "\(.module_name) \(.executed)"' 00:26:58.425 13:25:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:58.425 13:25:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:58.425 13:25:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:58.425 13:25:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:58.425 13:25:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 111692 00:26:58.425 13:25:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 111692 ']' 00:26:58.425 13:25:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 111692 00:26:58.425 13:25:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:26:58.425 13:25:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:58.425 13:25:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 111692 00:26:58.683 13:25:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:26:58.683 killing process with pid 111692 00:26:58.683 Received shutdown signal, test time was about 2.000000 seconds 00:26:58.683 00:26:58.683 Latency(us) 00:26:58.683 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:58.683 =================================================================================================================== 00:26:58.683 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:58.683 13:25:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:26:58.683 13:25:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 111692' 00:26:58.683 13:25:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 111692 00:26:58.683 13:25:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 111692 00:26:58.683 13:25:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:26:58.683 13:25:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:58.683 13:25:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:58.683 13:25:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:58.683 13:25:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:58.684 13:25:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:58.684 13:25:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:58.684 13:25:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=111781 00:26:58.684 13:25:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:58.684 13:25:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 111781 /var/tmp/bperf.sock 00:26:58.684 13:25:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 111781 ']' 00:26:58.684 13:25:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:58.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:58.684 13:25:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:58.684 13:25:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:58.684 13:25:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:58.684 13:25:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:58.941 [2024-07-15 13:25:55.445317] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:26:58.941 [2024-07-15 13:25:55.445425] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111781 ] 00:26:58.941 [2024-07-15 13:25:55.580299] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:59.198 [2024-07-15 13:25:55.684126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:59.761 13:25:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:59.761 13:25:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:26:59.761 13:25:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:59.761 13:25:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:59.761 13:25:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:00.018 13:25:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:00.018 13:25:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:00.583 nvme0n1 00:27:00.583 13:25:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:00.583 13:25:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:00.583 Running I/O for 2 seconds... 
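The teardown after each run follows the autotest_common.sh killprocess pattern that is traced repeatedly in this section. Condensed to the commands that actually appear in the trace, it is roughly:

    # killprocess <pid>: confirm the bdevperf process is alive, note which reactor thread it is, then reap it
    kill -0 "$pid"
    process_name=$(ps --no-headers -o comm= "$pid")   # reactor_1 for the runs above
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"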
00:27:02.483 00:27:02.483 Latency(us) 00:27:02.483 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:02.483 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:02.483 nvme0n1 : 2.01 22145.91 86.51 0.00 0.00 5770.44 2457.60 8877.15 00:27:02.483 =================================================================================================================== 00:27:02.483 Total : 22145.91 86.51 0.00 0.00 5770.44 2457.60 8877.15 00:27:02.483 0 00:27:02.483 13:25:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:02.483 13:25:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:02.483 13:25:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:02.483 13:25:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:02.483 13:25:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:02.483 | select(.opcode=="crc32c") 00:27:02.483 | "\(.module_name) \(.executed)"' 00:27:03.051 13:25:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:03.051 13:25:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:03.051 13:25:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:03.051 13:25:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:03.051 13:25:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 111781 00:27:03.051 13:25:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 111781 ']' 00:27:03.051 13:25:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 111781 00:27:03.051 13:25:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:27:03.051 13:25:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:03.051 13:25:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 111781 00:27:03.051 killing process with pid 111781 00:27:03.051 Received shutdown signal, test time was about 2.000000 seconds 00:27:03.051 00:27:03.051 Latency(us) 00:27:03.051 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:03.052 =================================================================================================================== 00:27:03.052 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:03.052 13:25:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:03.052 13:25:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:03.052 13:25:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 111781' 00:27:03.052 13:25:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 111781 00:27:03.052 13:25:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 111781 00:27:03.052 13:25:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:27:03.052 13:25:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:03.052 13:25:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:03.052 13:25:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:27:03.052 13:25:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:27:03.052 13:25:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:27:03.052 13:25:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:03.052 13:25:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=111873 00:27:03.052 13:25:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:27:03.052 13:25:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 111873 /var/tmp/bperf.sock 00:27:03.052 13:25:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 111873 ']' 00:27:03.052 13:25:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:03.052 13:25:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:03.052 13:25:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:03.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:03.052 13:25:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:03.052 13:25:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:03.310 [2024-07-15 13:25:59.798058] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:27:03.310 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:03.310 Zero copy mechanism will not be used. 
00:27:03.310 [2024-07-15 13:25:59.800284] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111873 ] 00:27:03.310 [2024-07-15 13:25:59.931933] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:03.310 [2024-07-15 13:26:00.034598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:03.571 13:26:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:03.571 13:26:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:27:03.571 13:26:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:03.571 13:26:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:03.571 13:26:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:03.830 13:26:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:03.830 13:26:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:04.396 nvme0n1 00:27:04.396 13:26:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:04.396 13:26:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:04.396 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:04.396 Zero copy mechanism will not be used. 00:27:04.396 Running I/O for 2 seconds... 
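For orientation, the nvmf_digest_clean stage consists of four run_bperf passes. The last three calls are traced in this log at host/digest.sh@129, @130 and @131; the first is inferred from the initial job line (randread, depth 128, IO size 4096):

    run_bperf randread  4096   128 false   # 4 KiB reads, qd 128 (inferred from the first job line)
    run_bperf randread  131072 16  false   # 128 KiB reads, qd 16 (triggers the zero-copy threshold notice)
    run_bperf randwrite 4096   128 false   # 4 KiB writes, qd 128
    run_bperf randwrite 131072 16  false   # 128 KiB writes, qd 16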
00:27:06.351 00:27:06.351 Latency(us) 00:27:06.351 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:06.351 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:27:06.351 nvme0n1 : 2.00 6155.07 769.38 0.00 0.00 2592.47 1980.97 7923.90 00:27:06.351 =================================================================================================================== 00:27:06.351 Total : 6155.07 769.38 0.00 0.00 2592.47 1980.97 7923.90 00:27:06.351 0 00:27:06.351 13:26:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:06.351 13:26:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:06.351 13:26:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:06.351 13:26:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:06.351 13:26:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:06.351 | select(.opcode=="crc32c") 00:27:06.351 | "\(.module_name) \(.executed)"' 00:27:06.625 13:26:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:06.625 13:26:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:06.625 13:26:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:06.625 13:26:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:06.625 13:26:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 111873 00:27:06.625 13:26:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 111873 ']' 00:27:06.625 13:26:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 111873 00:27:06.625 13:26:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:27:06.625 13:26:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:06.625 13:26:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 111873 00:27:06.625 13:26:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:06.625 killing process with pid 111873 00:27:06.625 13:26:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:06.625 13:26:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 111873' 00:27:06.625 Received shutdown signal, test time was about 2.000000 seconds 00:27:06.625 00:27:06.625 Latency(us) 00:27:06.625 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:06.625 =================================================================================================================== 00:27:06.625 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:06.625 13:26:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 111873 00:27:06.625 13:26:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 111873 00:27:06.883 13:26:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 111551 00:27:06.883 13:26:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@946 -- # '[' -z 111551 ']' 00:27:06.883 13:26:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 111551 00:27:06.883 13:26:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:27:06.883 13:26:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:06.883 13:26:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 111551 00:27:06.883 13:26:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:06.883 13:26:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:06.883 killing process with pid 111551 00:27:06.883 13:26:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 111551' 00:27:06.883 13:26:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 111551 00:27:06.883 13:26:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 111551 00:27:07.142 00:27:07.142 real 0m18.409s 00:27:07.142 user 0m35.177s 00:27:07.142 sys 0m4.685s 00:27:07.142 13:26:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:07.142 13:26:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:07.142 ************************************ 00:27:07.142 END TEST nvmf_digest_clean 00:27:07.142 ************************************ 00:27:07.142 13:26:03 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:27:07.142 13:26:03 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:07.142 13:26:03 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:07.142 13:26:03 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:07.142 ************************************ 00:27:07.142 START TEST nvmf_digest_error 00:27:07.142 ************************************ 00:27:07.142 13:26:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1121 -- # run_digest_error 00:27:07.142 13:26:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:27:07.142 13:26:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:07.142 13:26:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:07.142 13:26:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:07.142 13:26:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=111973 00:27:07.142 13:26:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 111973 00:27:07.142 13:26:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:07.142 13:26:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 111973 ']' 00:27:07.142 13:26:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:07.142 13:26:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:07.142 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:07.142 13:26:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:07.142 13:26:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:07.142 13:26:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:07.142 [2024-07-15 13:26:03.864973] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:27:07.142 [2024-07-15 13:26:03.865058] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:07.400 [2024-07-15 13:26:04.000486] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:07.400 [2024-07-15 13:26:04.103017] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:07.400 [2024-07-15 13:26:04.103079] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:07.400 [2024-07-15 13:26:04.103090] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:07.400 [2024-07-15 13:26:04.103100] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:07.400 [2024-07-15 13:26:04.103107] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:07.400 [2024-07-15 13:26:04.103134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:08.334 13:26:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:08.334 13:26:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:27:08.334 13:26:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:08.334 13:26:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:08.334 13:26:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:08.334 13:26:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:08.334 13:26:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:27:08.334 13:26:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.334 13:26:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:08.334 [2024-07-15 13:26:04.923680] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:27:08.334 13:26:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.334 13:26:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:27:08.334 13:26:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:27:08.334 13:26:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.334 13:26:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:08.334 null0 00:27:08.334 [2024-07-15 
13:26:05.037498] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:08.334 [2024-07-15 13:26:05.061614] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:08.334 13:26:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.334 13:26:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:27:08.334 13:26:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:08.334 13:26:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:27:08.334 13:26:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:27:08.334 13:26:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:27:08.334 13:26:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=112017 00:27:08.334 13:26:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 112017 /var/tmp/bperf.sock 00:27:08.334 13:26:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:27:08.334 13:26:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 112017 ']' 00:27:08.334 13:26:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:08.334 13:26:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:08.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:08.334 13:26:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:08.334 13:26:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:08.334 13:26:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:08.592 [2024-07-15 13:26:05.137022] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:27:08.592 [2024-07-15 13:26:05.137154] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112017 ] 00:27:08.592 [2024-07-15 13:26:05.281597] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:08.850 [2024-07-15 13:26:05.383602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:09.416 13:26:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:09.416 13:26:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:27:09.416 13:26:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:09.416 13:26:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:09.672 13:26:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:09.672 13:26:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.672 13:26:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:09.672 13:26:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.672 13:26:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:09.672 13:26:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:10.238 nvme0n1 00:27:10.238 13:26:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:27:10.238 13:26:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.238 13:26:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:10.238 13:26:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.238 13:26:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:10.238 13:26:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:10.238 Running I/O for 2 seconds... 
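The digest-error stage that produces the transport errors below is configured over the two RPC channels visible earlier in this section: rpc_cmd goes to the nvmf_tgt socket (/var/tmp/spdk.sock in the waitforlisten above), while bperf_rpc and bperf_py go to the bdevperf socket (/var/tmp/bperf.sock). A rough sketch, keeping the order in which the calls are traced:

    rpc_cmd accel_assign_opc -o crc32c -m error          # target: route crc32c through the error-injection module
    bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1   # initiator options, as in the trace
    rpc_cmd accel_error_inject_error -o crc32c -t disable            # injection kept off while attaching
    bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256     # arm crc32c corruption (-t corrupt -i 256, as logged)
    bperf_py perform_tests                               # the data digest errors logged below are the expected result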
00:27:10.238 [2024-07-15 13:26:06.855017] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:10.238 [2024-07-15 13:26:06.855107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:17117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.238 [2024-07-15 13:26:06.855132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.238 [2024-07-15 13:26:06.868394] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:10.238 [2024-07-15 13:26:06.868460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:19404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.238 [2024-07-15 13:26:06.868485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.238 [2024-07-15 13:26:06.881721] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:10.238 [2024-07-15 13:26:06.881775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14044 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.238 [2024-07-15 13:26:06.881798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.238 [2024-07-15 13:26:06.896521] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:10.238 [2024-07-15 13:26:06.896589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:15862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.238 [2024-07-15 13:26:06.896627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.238 [2024-07-15 13:26:06.909617] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:10.238 [2024-07-15 13:26:06.909671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:4732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.238 [2024-07-15 13:26:06.909695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.238 [2024-07-15 13:26:06.923174] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:10.238 [2024-07-15 13:26:06.923251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:8522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.238 [2024-07-15 13:26:06.923275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.238 [2024-07-15 13:26:06.936931] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:10.238 [2024-07-15 13:26:06.936983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:8607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.238 [2024-07-15 13:26:06.937007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.238 [2024-07-15 13:26:06.949352] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:10.238 [2024-07-15 13:26:06.949408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:10261 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.238 [2024-07-15 13:26:06.949431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.238 [2024-07-15 13:26:06.964124] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:10.239 [2024-07-15 13:26:06.964178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:3643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.239 [2024-07-15 13:26:06.964202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.239 [2024-07-15 13:26:06.977437] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:10.239 [2024-07-15 13:26:06.977497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.496 [2024-07-15 13:26:06.977521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.496 [2024-07-15 13:26:06.990095] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:10.496 [2024-07-15 13:26:06.990150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:20423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.496 [2024-07-15 13:26:06.990172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.496 [2024-07-15 13:26:07.004641] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:10.496 [2024-07-15 13:26:07.004703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.496 [2024-07-15 13:26:07.004742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.496 [2024-07-15 13:26:07.018633] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:10.496 [2024-07-15 13:26:07.018687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:4526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.496 [2024-07-15 13:26:07.018710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.496 [2024-07-15 13:26:07.032366] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:10.496 [2024-07-15 13:26:07.032421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:17958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.497 [2024-07-15 13:26:07.032444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.497 [2024-07-15 13:26:07.044622] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:10.497 [2024-07-15 13:26:07.044678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:7357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.497 [2024-07-15 13:26:07.044701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.497 [2024-07-15 13:26:07.058964] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:10.497 [2024-07-15 13:26:07.059030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:5180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.497 [2024-07-15 13:26:07.059054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.497 [2024-07-15 13:26:07.072830] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:10.497 [2024-07-15 13:26:07.072893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:8088 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.497 [2024-07-15 13:26:07.072916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.497 [2024-07-15 13:26:07.088419] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:10.497 [2024-07-15 13:26:07.088499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:7761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.497 [2024-07-15 13:26:07.088524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.497 [2024-07-15 13:26:07.102816] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:10.497 [2024-07-15 13:26:07.102888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:1567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.497 [2024-07-15 13:26:07.102911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.497 [2024-07-15 13:26:07.114784] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:10.497 [2024-07-15 13:26:07.114847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:13124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.497 [2024-07-15 13:26:07.114871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.497 [2024-07-15 13:26:07.128806] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:10.497 [2024-07-15 13:26:07.128866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.497 [2024-07-15 13:26:07.128890] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.497 [2024-07-15 13:26:07.142781] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:10.497 [2024-07-15 13:26:07.142852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:2559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.497 [2024-07-15 13:26:07.142876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.497 [2024-07-15 13:26:07.157342] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:10.497 [2024-07-15 13:26:07.157408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:17848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.497 [2024-07-15 13:26:07.157430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.497 [2024-07-15 13:26:07.169363] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:10.497 [2024-07-15 13:26:07.169423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:21096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.497 [2024-07-15 13:26:07.169447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.497 [2024-07-15 13:26:07.183841] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:10.497 [2024-07-15 13:26:07.183912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:19919 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.497 [2024-07-15 13:26:07.183935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.497 [2024-07-15 13:26:07.196838] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:10.497 [2024-07-15 13:26:07.196905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:21840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.497 [2024-07-15 13:26:07.196929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.497 [2024-07-15 13:26:07.209313] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:10.497 [2024-07-15 13:26:07.209374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.497 [2024-07-15 13:26:07.209398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.497 [2024-07-15 13:26:07.223523] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:10.497 [2024-07-15 13:26:07.223583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5985 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:10.497 [2024-07-15 13:26:07.223608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.755 [2024-07-15 13:26:07.237687] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:10.755 [2024-07-15 13:26:07.237752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:11667 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.755 [2024-07-15 13:26:07.237775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.755 [2024-07-15 13:26:07.252831] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:10.755 [2024-07-15 13:26:07.252891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.755 [2024-07-15 13:26:07.252914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.755 [2024-07-15 13:26:07.267472] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:10.755 [2024-07-15 13:26:07.267538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:22442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.755 [2024-07-15 13:26:07.267562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.755 [2024-07-15 13:26:07.281604] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:10.755 [2024-07-15 13:26:07.281663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:13718 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.755 [2024-07-15 13:26:07.281687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.755 [2024-07-15 13:26:07.293137] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:10.755 [2024-07-15 13:26:07.293200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:17085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.755 [2024-07-15 13:26:07.293240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.755 [2024-07-15 13:26:07.306707] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:10.755 [2024-07-15 13:26:07.306775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:24263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.755 [2024-07-15 13:26:07.306800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.755 [2024-07-15 13:26:07.319179] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:10.755 [2024-07-15 13:26:07.319261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 
lba:10378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.756 [2024-07-15 13:26:07.319286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.756 [2024-07-15 13:26:07.336034] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:10.756 [2024-07-15 13:26:07.336104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:8222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.756 [2024-07-15 13:26:07.336129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.756 [2024-07-15 13:26:07.349181] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:10.756 [2024-07-15 13:26:07.349263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:18943 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.756 [2024-07-15 13:26:07.349287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.756 [2024-07-15 13:26:07.361335] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:10.756 [2024-07-15 13:26:07.361398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:20125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.756 [2024-07-15 13:26:07.361423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.756 [2024-07-15 13:26:07.374160] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:10.756 [2024-07-15 13:26:07.374256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:9391 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.756 [2024-07-15 13:26:07.374280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.756 [2024-07-15 13:26:07.390173] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:10.756 [2024-07-15 13:26:07.390253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:6080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.756 [2024-07-15 13:26:07.390279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.756 [2024-07-15 13:26:07.403218] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:10.756 [2024-07-15 13:26:07.403285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.756 [2024-07-15 13:26:07.403309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.756 [2024-07-15 13:26:07.417823] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:10.756 [2024-07-15 13:26:07.417888] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:23922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.756 [2024-07-15 13:26:07.417914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.756 [2024-07-15 13:26:07.430715] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:10.756 [2024-07-15 13:26:07.430791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.756 [2024-07-15 13:26:07.430815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.756 [2024-07-15 13:26:07.444749] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:10.756 [2024-07-15 13:26:07.444803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:11509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.756 [2024-07-15 13:26:07.444826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.756 [2024-07-15 13:26:07.458609] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:10.756 [2024-07-15 13:26:07.458664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:17291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.756 [2024-07-15 13:26:07.458687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.756 [2024-07-15 13:26:07.473573] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:10.756 [2024-07-15 13:26:07.473636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:9432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.756 [2024-07-15 13:26:07.473659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.756 [2024-07-15 13:26:07.485245] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:10.756 [2024-07-15 13:26:07.485299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:20963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.756 [2024-07-15 13:26:07.485328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.013 [2024-07-15 13:26:07.500333] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:11.013 [2024-07-15 13:26:07.500385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.013 [2024-07-15 13:26:07.500408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.013 [2024-07-15 13:26:07.514844] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:11.013 
[2024-07-15 13:26:07.514900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:2861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.013 [2024-07-15 13:26:07.514923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.013 [2024-07-15 13:26:07.528156] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:11.013 [2024-07-15 13:26:07.528237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:22286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.013 [2024-07-15 13:26:07.528262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.013 [2024-07-15 13:26:07.540709] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:11.013 [2024-07-15 13:26:07.540768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:6907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.013 [2024-07-15 13:26:07.540792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.013 [2024-07-15 13:26:07.557029] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:11.013 [2024-07-15 13:26:07.557084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:16485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.013 [2024-07-15 13:26:07.557106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.013 [2024-07-15 13:26:07.569967] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:11.013 [2024-07-15 13:26:07.570020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:14229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.013 [2024-07-15 13:26:07.570043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.013 [2024-07-15 13:26:07.583028] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:11.013 [2024-07-15 13:26:07.583079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:10499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.013 [2024-07-15 13:26:07.583117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.013 [2024-07-15 13:26:07.598293] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:11.013 [2024-07-15 13:26:07.598348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.013 [2024-07-15 13:26:07.598370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.013 [2024-07-15 13:26:07.612535] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x644d40) 00:27:11.013 [2024-07-15 13:26:07.612606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:19453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.013 [2024-07-15 13:26:07.612628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.013 [2024-07-15 13:26:07.626932] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:11.013 [2024-07-15 13:26:07.626999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:24935 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.013 [2024-07-15 13:26:07.627021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.014 [2024-07-15 13:26:07.640858] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:11.014 [2024-07-15 13:26:07.640935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.014 [2024-07-15 13:26:07.640959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.014 [2024-07-15 13:26:07.652764] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:11.014 [2024-07-15 13:26:07.652836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:21660 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.014 [2024-07-15 13:26:07.652859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.014 [2024-07-15 13:26:07.667310] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:11.014 [2024-07-15 13:26:07.667381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:20454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.014 [2024-07-15 13:26:07.667404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.014 [2024-07-15 13:26:07.681241] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:11.014 [2024-07-15 13:26:07.681299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12919 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.014 [2024-07-15 13:26:07.681322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.014 [2024-07-15 13:26:07.694718] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:11.014 [2024-07-15 13:26:07.694788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:22870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.014 [2024-07-15 13:26:07.694813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.014 [2024-07-15 13:26:07.709688] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:11.014 [2024-07-15 13:26:07.709743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:17485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.014 [2024-07-15 13:26:07.709766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.014 [2024-07-15 13:26:07.722286] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:11.014 [2024-07-15 13:26:07.722340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.014 [2024-07-15 13:26:07.722364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.014 [2024-07-15 13:26:07.737602] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:11.014 [2024-07-15 13:26:07.737657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:8136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.014 [2024-07-15 13:26:07.737678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.014 [2024-07-15 13:26:07.752021] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:11.014 [2024-07-15 13:26:07.752078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:22789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.271 [2024-07-15 13:26:07.752101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.271 [2024-07-15 13:26:07.766907] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:11.271 [2024-07-15 13:26:07.767000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:1619 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.271 [2024-07-15 13:26:07.767024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.271 [2024-07-15 13:26:07.778999] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:11.271 [2024-07-15 13:26:07.779095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:4088 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.271 [2024-07-15 13:26:07.779121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.271 [2024-07-15 13:26:07.794649] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:11.271 [2024-07-15 13:26:07.794729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:24461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.271 [2024-07-15 13:26:07.794779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:27:11.271 [2024-07-15 13:26:07.808988] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:11.271 [2024-07-15 13:26:07.809047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:13073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.271 [2024-07-15 13:26:07.809070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.271 [2024-07-15 13:26:07.824453] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:11.271 [2024-07-15 13:26:07.824515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:11961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.271 [2024-07-15 13:26:07.824540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.271 [2024-07-15 13:26:07.838551] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:11.271 [2024-07-15 13:26:07.838622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:5887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.271 [2024-07-15 13:26:07.838647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.271 [2024-07-15 13:26:07.851428] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:11.271 [2024-07-15 13:26:07.851482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:24363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.271 [2024-07-15 13:26:07.851507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.271 [2024-07-15 13:26:07.863339] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:11.271 [2024-07-15 13:26:07.863395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:18394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.271 [2024-07-15 13:26:07.863421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.271 [2024-07-15 13:26:07.878999] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:11.271 [2024-07-15 13:26:07.879087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:7100 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.271 [2024-07-15 13:26:07.879115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.271 [2024-07-15 13:26:07.893818] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:11.271 [2024-07-15 13:26:07.893899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:16404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.271 [2024-07-15 13:26:07.893922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.271 [2024-07-15 13:26:07.908541] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:11.271 [2024-07-15 13:26:07.908630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:21813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.271 [2024-07-15 13:26:07.908655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.271 [2024-07-15 13:26:07.920251] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:11.271 [2024-07-15 13:26:07.920315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:21102 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.271 [2024-07-15 13:26:07.920338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.271 [2024-07-15 13:26:07.934785] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:11.271 [2024-07-15 13:26:07.934840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:966 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.271 [2024-07-15 13:26:07.934870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.271 [2024-07-15 13:26:07.950874] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:11.271 [2024-07-15 13:26:07.950936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:19167 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.271 [2024-07-15 13:26:07.950966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.271 [2024-07-15 13:26:07.966309] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:11.271 [2024-07-15 13:26:07.966368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:3950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.271 [2024-07-15 13:26:07.966391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.271 [2024-07-15 13:26:07.979684] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:11.271 [2024-07-15 13:26:07.979754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6798 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.271 [2024-07-15 13:26:07.979778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.271 [2024-07-15 13:26:07.992414] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:11.271 [2024-07-15 13:26:07.992513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:25458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.271 [2024-07-15 13:26:07.992538] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.271 [2024-07-15 13:26:08.006712] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:11.271 [2024-07-15 13:26:08.006807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:22200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.271 [2024-07-15 13:26:08.006830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.529 [2024-07-15 13:26:08.020284] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:11.529 [2024-07-15 13:26:08.020380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:1225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.529 [2024-07-15 13:26:08.020404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.529 [2024-07-15 13:26:08.034103] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:11.530 [2024-07-15 13:26:08.034196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:16041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.530 [2024-07-15 13:26:08.034233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.530 [2024-07-15 13:26:08.048632] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:11.530 [2024-07-15 13:26:08.048690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.530 [2024-07-15 13:26:08.048713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.530 [2024-07-15 13:26:08.062618] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:11.530 [2024-07-15 13:26:08.062723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:11430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.530 [2024-07-15 13:26:08.062760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.530 [2024-07-15 13:26:08.076569] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:11.530 [2024-07-15 13:26:08.076674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:10091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.530 [2024-07-15 13:26:08.076697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.530 [2024-07-15 13:26:08.092153] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:11.530 [2024-07-15 13:26:08.092273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:3550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.530 [2024-07-15 13:26:08.092302] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.530 [2024-07-15 13:26:08.107012] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:11.530 [2024-07-15 13:26:08.107099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:21265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.530 [2024-07-15 13:26:08.107123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.530 [2024-07-15 13:26:08.120528] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:11.530 [2024-07-15 13:26:08.120590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:14687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.530 [2024-07-15 13:26:08.120613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.530 [2024-07-15 13:26:08.134564] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:11.530 [2024-07-15 13:26:08.134651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.530 [2024-07-15 13:26:08.134675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.530 [2024-07-15 13:26:08.146264] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:11.530 [2024-07-15 13:26:08.146347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:8701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.530 [2024-07-15 13:26:08.146371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.530 [2024-07-15 13:26:08.159881] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:11.530 [2024-07-15 13:26:08.159969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:12588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.530 [2024-07-15 13:26:08.159996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.530 [2024-07-15 13:26:08.174998] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:11.530 [2024-07-15 13:26:08.175090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:9036 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.530 [2024-07-15 13:26:08.175114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.530 [2024-07-15 13:26:08.187951] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:11.530 [2024-07-15 13:26:08.188017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:13567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:11.530 [2024-07-15 13:26:08.188040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.530 [2024-07-15 13:26:08.202572] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:11.530 [2024-07-15 13:26:08.202666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:17798 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.530 [2024-07-15 13:26:08.202689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.530 [2024-07-15 13:26:08.215485] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:11.530 [2024-07-15 13:26:08.215578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.530 [2024-07-15 13:26:08.215605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.530 [2024-07-15 13:26:08.229913] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:11.530 [2024-07-15 13:26:08.230003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.530 [2024-07-15 13:26:08.230028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.530 [2024-07-15 13:26:08.244616] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:11.530 [2024-07-15 13:26:08.244698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:23079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.530 [2024-07-15 13:26:08.244723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.530 [2024-07-15 13:26:08.260047] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:11.530 [2024-07-15 13:26:08.260112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:5545 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.530 [2024-07-15 13:26:08.260135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.788 [2024-07-15 13:26:08.275141] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:11.788 [2024-07-15 13:26:08.275251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:4799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.788 [2024-07-15 13:26:08.275277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.788 [2024-07-15 13:26:08.286651] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:11.788 [2024-07-15 13:26:08.286744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:1232 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.788 [2024-07-15 13:26:08.286796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.788 [2024-07-15 13:26:08.301013] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:11.788 [2024-07-15 13:26:08.301096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:9477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.788 [2024-07-15 13:26:08.301121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.788 [2024-07-15 13:26:08.315925] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:11.788 [2024-07-15 13:26:08.316002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.788 [2024-07-15 13:26:08.316026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.788 [2024-07-15 13:26:08.330342] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:11.788 [2024-07-15 13:26:08.330430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:13183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.788 [2024-07-15 13:26:08.330456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.788 [2024-07-15 13:26:08.344231] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:11.788 [2024-07-15 13:26:08.344335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:14246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.788 [2024-07-15 13:26:08.344360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.788 [2024-07-15 13:26:08.358359] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:11.788 [2024-07-15 13:26:08.358444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:22954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.788 [2024-07-15 13:26:08.358471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.788 [2024-07-15 13:26:08.370367] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:11.788 [2024-07-15 13:26:08.370461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:14822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.788 [2024-07-15 13:26:08.370489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.788 [2024-07-15 13:26:08.385431] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:11.788 [2024-07-15 13:26:08.385489] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.788 [2024-07-15 13:26:08.385512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.788 [2024-07-15 13:26:08.399060] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:11.788 [2024-07-15 13:26:08.399169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:8750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.788 [2024-07-15 13:26:08.399194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.788 [2024-07-15 13:26:08.412572] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:11.788 [2024-07-15 13:26:08.412661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:1882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.788 [2024-07-15 13:26:08.412686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.788 [2024-07-15 13:26:08.425949] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:11.788 [2024-07-15 13:26:08.426044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.789 [2024-07-15 13:26:08.426069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.789 [2024-07-15 13:26:08.441532] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:11.789 [2024-07-15 13:26:08.441631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:20100 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.789 [2024-07-15 13:26:08.441656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.789 [2024-07-15 13:26:08.455890] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:11.789 [2024-07-15 13:26:08.455964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:17727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.789 [2024-07-15 13:26:08.455989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.789 [2024-07-15 13:26:08.470767] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:11.789 [2024-07-15 13:26:08.470853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:25224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.789 [2024-07-15 13:26:08.470878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.789 [2024-07-15 13:26:08.486493] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:11.789 [2024-07-15 13:26:08.486583] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:9714 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.789 [2024-07-15 13:26:08.486607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.789 [2024-07-15 13:26:08.500857] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:11.789 [2024-07-15 13:26:08.500941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:8496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.789 [2024-07-15 13:26:08.500965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.789 [2024-07-15 13:26:08.513218] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:11.789 [2024-07-15 13:26:08.513287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:7008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.789 [2024-07-15 13:26:08.513323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.789 [2024-07-15 13:26:08.525924] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:11.789 [2024-07-15 13:26:08.525988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:21209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.789 [2024-07-15 13:26:08.526013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.047 [2024-07-15 13:26:08.540808] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:12.047 [2024-07-15 13:26:08.540866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5354 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.047 [2024-07-15 13:26:08.540890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.047 [2024-07-15 13:26:08.554482] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:12.047 [2024-07-15 13:26:08.554540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:5603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.047 [2024-07-15 13:26:08.554565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.047 [2024-07-15 13:26:08.569904] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:12.047 [2024-07-15 13:26:08.569956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:1738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.047 [2024-07-15 13:26:08.569979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.047 [2024-07-15 13:26:08.584269] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 
00:27:12.047 [2024-07-15 13:26:08.584325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:24864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.047 [2024-07-15 13:26:08.584349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.047 [2024-07-15 13:26:08.598815] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:12.047 [2024-07-15 13:26:08.598876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:12296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.047 [2024-07-15 13:26:08.598899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.047 [2024-07-15 13:26:08.612057] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:12.047 [2024-07-15 13:26:08.612115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:7494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.047 [2024-07-15 13:26:08.612139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.047 [2024-07-15 13:26:08.626957] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:12.047 [2024-07-15 13:26:08.627018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:10084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.047 [2024-07-15 13:26:08.627042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.047 [2024-07-15 13:26:08.641640] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:12.047 [2024-07-15 13:26:08.641697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:8544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.047 [2024-07-15 13:26:08.641720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.047 [2024-07-15 13:26:08.655973] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:12.047 [2024-07-15 13:26:08.656025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.047 [2024-07-15 13:26:08.656048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.047 [2024-07-15 13:26:08.668089] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:12.047 [2024-07-15 13:26:08.668142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:24119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.047 [2024-07-15 13:26:08.668164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.047 [2024-07-15 13:26:08.685136] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x644d40) 00:27:12.047 [2024-07-15 13:26:08.685193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:8389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.047 [2024-07-15 13:26:08.685231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.048 [2024-07-15 13:26:08.698642] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:12.048 [2024-07-15 13:26:08.698697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:18636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.048 [2024-07-15 13:26:08.698720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.048 [2024-07-15 13:26:08.712060] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:12.048 [2024-07-15 13:26:08.712115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:24931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.048 [2024-07-15 13:26:08.712138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.048 [2024-07-15 13:26:08.723789] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:12.048 [2024-07-15 13:26:08.723840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:11200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.048 [2024-07-15 13:26:08.723863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.048 [2024-07-15 13:26:08.737270] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:12.048 [2024-07-15 13:26:08.737333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:17461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.048 [2024-07-15 13:26:08.737355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.048 [2024-07-15 13:26:08.751872] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:12.048 [2024-07-15 13:26:08.751926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:20424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.048 [2024-07-15 13:26:08.751950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.048 [2024-07-15 13:26:08.765711] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:12.048 [2024-07-15 13:26:08.765765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:25008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.048 [2024-07-15 13:26:08.765787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.048 [2024-07-15 13:26:08.780621] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:12.048 [2024-07-15 13:26:08.780681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:3869 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.048 [2024-07-15 13:26:08.780704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.306 [2024-07-15 13:26:08.794613] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:12.306 [2024-07-15 13:26:08.794669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:13994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.306 [2024-07-15 13:26:08.794693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.306 [2024-07-15 13:26:08.808771] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:12.306 [2024-07-15 13:26:08.808831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:6120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.306 [2024-07-15 13:26:08.808855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.306 [2024-07-15 13:26:08.821437] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:12.306 [2024-07-15 13:26:08.821492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:8770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.306 [2024-07-15 13:26:08.821516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.306 [2024-07-15 13:26:08.834396] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x644d40) 00:27:12.306 [2024-07-15 13:26:08.834446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:1465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.306 [2024-07-15 13:26:08.834470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.306 00:27:12.306 Latency(us) 00:27:12.306 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:12.306 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:27:12.306 nvme0n1 : 2.01 18210.89 71.14 0.00 0.00 7019.51 3813.00 18826.71 00:27:12.306 =================================================================================================================== 00:27:12.306 Total : 18210.89 71.14 0.00 0.00 7019.51 3813.00 18826.71 00:27:12.306 0 00:27:12.306 13:26:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:12.306 13:26:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:12.306 | .driver_specific 00:27:12.306 | .nvme_error 00:27:12.306 | .status_code 00:27:12.306 | .command_transient_transport_error' 00:27:12.306 13:26:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:12.306 13:26:08 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:12.564 13:26:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 143 > 0 )) 00:27:12.564 13:26:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 112017 00:27:12.564 13:26:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 112017 ']' 00:27:12.564 13:26:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 112017 00:27:12.564 13:26:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:27:12.564 13:26:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:12.564 13:26:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 112017 00:27:12.564 13:26:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:12.564 13:26:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:12.564 killing process with pid 112017 00:27:12.564 13:26:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 112017' 00:27:12.564 13:26:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 112017 00:27:12.564 Received shutdown signal, test time was about 2.000000 seconds 00:27:12.564 00:27:12.564 Latency(us) 00:27:12.564 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:12.564 =================================================================================================================== 00:27:12.564 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:12.564 13:26:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 112017 00:27:12.822 13:26:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:27:12.822 13:26:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:12.822 13:26:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:27:12.822 13:26:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:27:12.822 13:26:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:27:12.822 13:26:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=112104 00:27:12.822 13:26:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:27:12.822 13:26:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 112104 /var/tmp/bperf.sock 00:27:12.822 13:26:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 112104 ']' 00:27:12.822 13:26:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:12.822 13:26:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:12.822 13:26:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:27:12.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:12.822 13:26:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:12.822 13:26:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:12.822 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:12.822 Zero copy mechanism will not be used. 00:27:12.822 [2024-07-15 13:26:09.479478] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:27:12.822 [2024-07-15 13:26:09.479584] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112104 ] 00:27:13.080 [2024-07-15 13:26:09.618405] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:13.080 [2024-07-15 13:26:09.719626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:14.013 13:26:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:14.013 13:26:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:27:14.013 13:26:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:14.013 13:26:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:14.271 13:26:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:14.271 13:26:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.271 13:26:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:14.271 13:26:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.271 13:26:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:14.271 13:26:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:14.529 nvme0n1 00:27:14.529 13:26:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:27:14.529 13:26:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.529 13:26:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:14.529 13:26:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.529 13:26:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:14.529 13:26:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:14.529 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:14.529 Zero copy mechanism will not be used. 
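For reference, the digest-error run traced above reduces to a handful of RPC calls. Below is a minimal shell sketch reconstructed only from the commands visible in this log; the paths, the 10.0.0.2:4420 subsystem, the /var/tmp/bperf.sock socket, and the -i 32 injection argument are all taken from the trace, while the assumption that the plain rpc_cmd calls reach the target application's default RPC socket (rather than bperf.sock) is mine.

rootdir=/home/vagrant/spdk_repo/spdk
bperf_rpc="$rootdir/scripts/rpc.py -s /var/tmp/bperf.sock"   # RPCs aimed at the bdevperf app
tgt_rpc="$rootdir/scripts/rpc.py"                            # default socket; assumed to reach the nvmf target app

# Start bdevperf in wait mode (-z) with the randread / 131072-byte (128 KiB) / qd16 profile
# shown in the log, then wait for /var/tmp/bperf.sock (the harness uses waitforlisten).
"$rootdir/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
    -w randread -o 131072 -t 2 -q 16 -z &

# Keep per-type NVMe error counters and retry indefinitely, so digest failures surface as
# COMMAND TRANSIENT TRANSPORT ERROR completions instead of failed I/O.
$bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Attach the controller with data digest enabled (--ddgst), then corrupt crc32c results in
# the accel layer (-i 32, presumably an injection interval) so the DDGST check fails on receive.
$tgt_rpc accel_error_inject_error -o crc32c -t disable
$bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
$tgt_rpc accel_error_inject_error -o crc32c -t corrupt -i 32

# Run the 2-second workload, then pull the transient-transport-error count from iostat;
# the test only needs it to be non-zero (compare the "(( 143 > 0 ))" check earlier in this log).
"$rootdir/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests
$bperf_rpc bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'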
00:27:14.529 Running I/O for 2 seconds... 00:27:14.529 [2024-07-15 13:26:11.235632] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:14.529 [2024-07-15 13:26:11.235716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.529 [2024-07-15 13:26:11.235740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.529 [2024-07-15 13:26:11.241141] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:14.529 [2024-07-15 13:26:11.241227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.529 [2024-07-15 13:26:11.241254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.529 [2024-07-15 13:26:11.246062] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:14.529 [2024-07-15 13:26:11.246123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.529 [2024-07-15 13:26:11.246146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.530 [2024-07-15 13:26:11.249959] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:14.530 [2024-07-15 13:26:11.250018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.530 [2024-07-15 13:26:11.250041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.530 [2024-07-15 13:26:11.254006] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:14.530 [2024-07-15 13:26:11.254066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.530 [2024-07-15 13:26:11.254089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.530 [2024-07-15 13:26:11.258614] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:14.530 [2024-07-15 13:26:11.258674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.530 [2024-07-15 13:26:11.258698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.530 [2024-07-15 13:26:11.262251] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:14.530 [2024-07-15 13:26:11.262303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.530 [2024-07-15 13:26:11.262327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.530 [2024-07-15 13:26:11.266559] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:14.530 [2024-07-15 13:26:11.266618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.530 [2024-07-15 13:26:11.266641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.790 [2024-07-15 13:26:11.271010] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:14.790 [2024-07-15 13:26:11.271068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.790 [2024-07-15 13:26:11.271093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.790 [2024-07-15 13:26:11.275581] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:14.790 [2024-07-15 13:26:11.275644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.790 [2024-07-15 13:26:11.275666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.790 [2024-07-15 13:26:11.280709] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:14.790 [2024-07-15 13:26:11.280778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.790 [2024-07-15 13:26:11.280803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.790 [2024-07-15 13:26:11.285402] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:14.790 [2024-07-15 13:26:11.285461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.790 [2024-07-15 13:26:11.285486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.790 [2024-07-15 13:26:11.288712] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:14.790 [2024-07-15 13:26:11.288772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.790 [2024-07-15 13:26:11.288796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.790 [2024-07-15 13:26:11.293518] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:14.790 [2024-07-15 13:26:11.293578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.790 [2024-07-15 13:26:11.293600] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.790 [2024-07-15 13:26:11.298182] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:14.790 [2024-07-15 13:26:11.298257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.790 [2024-07-15 13:26:11.298282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.790 [2024-07-15 13:26:11.301847] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:14.790 [2024-07-15 13:26:11.301904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.790 [2024-07-15 13:26:11.301926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.790 [2024-07-15 13:26:11.306848] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:14.790 [2024-07-15 13:26:11.306905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.790 [2024-07-15 13:26:11.306927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.790 [2024-07-15 13:26:11.311807] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:14.790 [2024-07-15 13:26:11.311868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.790 [2024-07-15 13:26:11.311894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.790 [2024-07-15 13:26:11.316483] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:14.790 [2024-07-15 13:26:11.316549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.790 [2024-07-15 13:26:11.316573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.790 [2024-07-15 13:26:11.320353] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:14.790 [2024-07-15 13:26:11.320411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.790 [2024-07-15 13:26:11.320435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.790 [2024-07-15 13:26:11.325040] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:14.790 [2024-07-15 13:26:11.325102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.790 
[2024-07-15 13:26:11.325128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.790 [2024-07-15 13:26:11.329010] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:14.790 [2024-07-15 13:26:11.329071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.790 [2024-07-15 13:26:11.329094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.790 [2024-07-15 13:26:11.333745] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:14.790 [2024-07-15 13:26:11.333806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.790 [2024-07-15 13:26:11.333831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.790 [2024-07-15 13:26:11.338661] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:14.790 [2024-07-15 13:26:11.338720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.790 [2024-07-15 13:26:11.338744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.790 [2024-07-15 13:26:11.342919] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:14.790 [2024-07-15 13:26:11.342979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.790 [2024-07-15 13:26:11.343002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.790 [2024-07-15 13:26:11.346703] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:14.790 [2024-07-15 13:26:11.346771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.790 [2024-07-15 13:26:11.346796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.790 [2024-07-15 13:26:11.351690] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:14.790 [2024-07-15 13:26:11.351756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.790 [2024-07-15 13:26:11.351779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.790 [2024-07-15 13:26:11.357244] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:14.790 [2024-07-15 13:26:11.357305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5088 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:27:14.790 [2024-07-15 13:26:11.357328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.790 [2024-07-15 13:26:11.362525] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:14.790 [2024-07-15 13:26:11.362592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.790 [2024-07-15 13:26:11.362617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.790 [2024-07-15 13:26:11.367124] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:14.790 [2024-07-15 13:26:11.367187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.790 [2024-07-15 13:26:11.367225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.790 [2024-07-15 13:26:11.370524] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:14.790 [2024-07-15 13:26:11.370576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.790 [2024-07-15 13:26:11.370602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.790 [2024-07-15 13:26:11.375505] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:14.790 [2024-07-15 13:26:11.375566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.790 [2024-07-15 13:26:11.375589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.790 [2024-07-15 13:26:11.379451] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:14.790 [2024-07-15 13:26:11.379509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.790 [2024-07-15 13:26:11.379535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.790 [2024-07-15 13:26:11.383952] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:14.790 [2024-07-15 13:26:11.384009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.790 [2024-07-15 13:26:11.384032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.790 [2024-07-15 13:26:11.389085] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:14.790 [2024-07-15 13:26:11.389156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.790 [2024-07-15 13:26:11.389182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.791 [2024-07-15 13:26:11.393789] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:14.791 [2024-07-15 13:26:11.393851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.791 [2024-07-15 13:26:11.393875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.791 [2024-07-15 13:26:11.396733] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:14.791 [2024-07-15 13:26:11.396783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.791 [2024-07-15 13:26:11.396807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.791 [2024-07-15 13:26:11.401631] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:14.791 [2024-07-15 13:26:11.401685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.791 [2024-07-15 13:26:11.401709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.791 [2024-07-15 13:26:11.406532] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:14.791 [2024-07-15 13:26:11.406587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.791 [2024-07-15 13:26:11.406610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.791 [2024-07-15 13:26:11.409914] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:14.791 [2024-07-15 13:26:11.409965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.791 [2024-07-15 13:26:11.409988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.791 [2024-07-15 13:26:11.414197] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:14.791 [2024-07-15 13:26:11.414267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.791 [2024-07-15 13:26:11.414292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.791 [2024-07-15 13:26:11.419534] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:14.791 [2024-07-15 13:26:11.419602] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.791 [2024-07-15 13:26:11.419626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.791 [2024-07-15 13:26:11.423292] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:14.791 [2024-07-15 13:26:11.423354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.791 [2024-07-15 13:26:11.423377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.791 [2024-07-15 13:26:11.428150] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:14.791 [2024-07-15 13:26:11.428227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.791 [2024-07-15 13:26:11.428253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.791 [2024-07-15 13:26:11.432793] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:14.791 [2024-07-15 13:26:11.432853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.791 [2024-07-15 13:26:11.432876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.791 [2024-07-15 13:26:11.436903] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:14.791 [2024-07-15 13:26:11.436965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.791 [2024-07-15 13:26:11.436987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.791 [2024-07-15 13:26:11.441258] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:14.791 [2024-07-15 13:26:11.441320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.791 [2024-07-15 13:26:11.441345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.791 [2024-07-15 13:26:11.445949] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:14.791 [2024-07-15 13:26:11.446017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.791 [2024-07-15 13:26:11.446041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.791 [2024-07-15 13:26:11.451296] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:14.791 
[2024-07-15 13:26:11.451359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.791 [2024-07-15 13:26:11.451383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.791 [2024-07-15 13:26:11.455149] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:14.791 [2024-07-15 13:26:11.455223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.791 [2024-07-15 13:26:11.455248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.791 [2024-07-15 13:26:11.459591] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:14.791 [2024-07-15 13:26:11.459647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.791 [2024-07-15 13:26:11.459672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.791 [2024-07-15 13:26:11.464563] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:14.791 [2024-07-15 13:26:11.464629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.791 [2024-07-15 13:26:11.464652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.791 [2024-07-15 13:26:11.469395] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:14.791 [2024-07-15 13:26:11.469460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.791 [2024-07-15 13:26:11.469485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.791 [2024-07-15 13:26:11.474193] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:14.791 [2024-07-15 13:26:11.474271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.791 [2024-07-15 13:26:11.474295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.791 [2024-07-15 13:26:11.477149] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:14.791 [2024-07-15 13:26:11.477215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.791 [2024-07-15 13:26:11.477240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.791 [2024-07-15 13:26:11.482489] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xe3b080) 00:27:14.791 [2024-07-15 13:26:11.482548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.791 [2024-07-15 13:26:11.482573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.791 [2024-07-15 13:26:11.487252] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:14.791 [2024-07-15 13:26:11.487314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.791 [2024-07-15 13:26:11.487336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.791 [2024-07-15 13:26:11.491891] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:14.791 [2024-07-15 13:26:11.491949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.791 [2024-07-15 13:26:11.491973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.791 [2024-07-15 13:26:11.494824] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:14.791 [2024-07-15 13:26:11.494877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.791 [2024-07-15 13:26:11.494900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.791 [2024-07-15 13:26:11.498846] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:14.791 [2024-07-15 13:26:11.498901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.791 [2024-07-15 13:26:11.498924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.791 [2024-07-15 13:26:11.503105] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:14.791 [2024-07-15 13:26:11.503163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.791 [2024-07-15 13:26:11.503187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.791 [2024-07-15 13:26:11.507676] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:14.791 [2024-07-15 13:26:11.507741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.791 [2024-07-15 13:26:11.507765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.791 [2024-07-15 13:26:11.511953] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:14.791 [2024-07-15 13:26:11.512009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.791 [2024-07-15 13:26:11.512033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.791 [2024-07-15 13:26:11.516143] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:14.791 [2024-07-15 13:26:11.516233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.791 [2024-07-15 13:26:11.516258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.792 [2024-07-15 13:26:11.520718] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:14.792 [2024-07-15 13:26:11.520803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.792 [2024-07-15 13:26:11.520828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.792 [2024-07-15 13:26:11.525535] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:14.792 [2024-07-15 13:26:11.525611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.792 [2024-07-15 13:26:11.525638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.051 [2024-07-15 13:26:11.530101] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.051 [2024-07-15 13:26:11.530174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.051 [2024-07-15 13:26:11.530200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.051 [2024-07-15 13:26:11.535122] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.051 [2024-07-15 13:26:11.535227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.051 [2024-07-15 13:26:11.535254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.051 [2024-07-15 13:26:11.539605] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.051 [2024-07-15 13:26:11.539697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.051 [2024-07-15 13:26:11.539723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:27:15.051 [2024-07-15 13:26:11.544167] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.051 [2024-07-15 13:26:11.544266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.051 [2024-07-15 13:26:11.544293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.051 [2024-07-15 13:26:11.548259] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.051 [2024-07-15 13:26:11.548339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.051 [2024-07-15 13:26:11.548364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.051 [2024-07-15 13:26:11.552818] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.051 [2024-07-15 13:26:11.552887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.051 [2024-07-15 13:26:11.552912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.051 [2024-07-15 13:26:11.558251] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.051 [2024-07-15 13:26:11.558323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.051 [2024-07-15 13:26:11.558349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.051 [2024-07-15 13:26:11.563364] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.051 [2024-07-15 13:26:11.563431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.051 [2024-07-15 13:26:11.563454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.051 [2024-07-15 13:26:11.566446] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.051 [2024-07-15 13:26:11.566501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.051 [2024-07-15 13:26:11.566525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.051 [2024-07-15 13:26:11.571827] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.051 [2024-07-15 13:26:11.571902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.051 [2024-07-15 13:26:11.571927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.051 [2024-07-15 13:26:11.576849] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.051 [2024-07-15 13:26:11.576918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.051 [2024-07-15 13:26:11.576941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.051 [2024-07-15 13:26:11.581575] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.051 [2024-07-15 13:26:11.581638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.051 [2024-07-15 13:26:11.581663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.051 [2024-07-15 13:26:11.585032] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.051 [2024-07-15 13:26:11.585104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.051 [2024-07-15 13:26:11.585128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.051 [2024-07-15 13:26:11.590539] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.051 [2024-07-15 13:26:11.590606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.051 [2024-07-15 13:26:11.590630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.051 [2024-07-15 13:26:11.595141] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.051 [2024-07-15 13:26:11.595233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.051 [2024-07-15 13:26:11.595260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.051 [2024-07-15 13:26:11.598432] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.051 [2024-07-15 13:26:11.598487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.051 [2024-07-15 13:26:11.598510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.051 [2024-07-15 13:26:11.603035] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.051 [2024-07-15 13:26:11.603103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.051 [2024-07-15 13:26:11.603126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.051 [2024-07-15 13:26:11.607860] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.051 [2024-07-15 13:26:11.607927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.051 [2024-07-15 13:26:11.607951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.051 [2024-07-15 13:26:11.612735] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.051 [2024-07-15 13:26:11.612798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.051 [2024-07-15 13:26:11.612821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.051 [2024-07-15 13:26:11.618218] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.051 [2024-07-15 13:26:11.618293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.051 [2024-07-15 13:26:11.618317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.051 [2024-07-15 13:26:11.621648] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.051 [2024-07-15 13:26:11.621706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.052 [2024-07-15 13:26:11.621731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.052 [2024-07-15 13:26:11.626441] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.052 [2024-07-15 13:26:11.626510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.052 [2024-07-15 13:26:11.626533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.052 [2024-07-15 13:26:11.631074] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.052 [2024-07-15 13:26:11.631140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.052 [2024-07-15 13:26:11.631165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.052 [2024-07-15 13:26:11.635993] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.052 [2024-07-15 13:26:11.636089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.052 [2024-07-15 13:26:11.636114] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.052 [2024-07-15 13:26:11.639124] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.052 [2024-07-15 13:26:11.639193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.052 [2024-07-15 13:26:11.639231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.052 [2024-07-15 13:26:11.644053] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.052 [2024-07-15 13:26:11.644133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.052 [2024-07-15 13:26:11.644159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.052 [2024-07-15 13:26:11.648652] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.052 [2024-07-15 13:26:11.648725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.052 [2024-07-15 13:26:11.648747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.052 [2024-07-15 13:26:11.653430] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.052 [2024-07-15 13:26:11.653509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.052 [2024-07-15 13:26:11.653533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.052 [2024-07-15 13:26:11.657184] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.052 [2024-07-15 13:26:11.657256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.052 [2024-07-15 13:26:11.657281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.052 [2024-07-15 13:26:11.661828] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.052 [2024-07-15 13:26:11.661898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.052 [2024-07-15 13:26:11.661924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.052 [2024-07-15 13:26:11.666741] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.052 [2024-07-15 13:26:11.666826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.052 
[2024-07-15 13:26:11.666851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.052 [2024-07-15 13:26:11.671836] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.052 [2024-07-15 13:26:11.671914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.052 [2024-07-15 13:26:11.671940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.052 [2024-07-15 13:26:11.677433] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.052 [2024-07-15 13:26:11.677515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.052 [2024-07-15 13:26:11.677539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.052 [2024-07-15 13:26:11.682333] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.052 [2024-07-15 13:26:11.682405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.052 [2024-07-15 13:26:11.682429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.052 [2024-07-15 13:26:11.685910] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.052 [2024-07-15 13:26:11.685967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.052 [2024-07-15 13:26:11.685991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.052 [2024-07-15 13:26:11.690393] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.052 [2024-07-15 13:26:11.690451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.052 [2024-07-15 13:26:11.690475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.052 [2024-07-15 13:26:11.695445] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.052 [2024-07-15 13:26:11.695511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.052 [2024-07-15 13:26:11.695535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.052 [2024-07-15 13:26:11.699115] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.052 [2024-07-15 13:26:11.699180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5472 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:27:15.052 [2024-07-15 13:26:11.699223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.052 [2024-07-15 13:26:11.704112] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.052 [2024-07-15 13:26:11.704184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.052 [2024-07-15 13:26:11.704224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.052 [2024-07-15 13:26:11.709120] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.052 [2024-07-15 13:26:11.709189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.052 [2024-07-15 13:26:11.709226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.052 [2024-07-15 13:26:11.714372] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.052 [2024-07-15 13:26:11.714437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.052 [2024-07-15 13:26:11.714459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.052 [2024-07-15 13:26:11.717385] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.053 [2024-07-15 13:26:11.717439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.053 [2024-07-15 13:26:11.717464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.053 [2024-07-15 13:26:11.722357] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.053 [2024-07-15 13:26:11.722425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.053 [2024-07-15 13:26:11.722449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.053 [2024-07-15 13:26:11.726904] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.053 [2024-07-15 13:26:11.726967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.053 [2024-07-15 13:26:11.726992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.053 [2024-07-15 13:26:11.731706] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.053 [2024-07-15 13:26:11.731777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 
nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.053 [2024-07-15 13:26:11.731802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.053 [2024-07-15 13:26:11.736056] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.053 [2024-07-15 13:26:11.736121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.053 [2024-07-15 13:26:11.736145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.053 [2024-07-15 13:26:11.741079] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.053 [2024-07-15 13:26:11.741159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.053 [2024-07-15 13:26:11.741184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.053 [2024-07-15 13:26:11.744709] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.053 [2024-07-15 13:26:11.744781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.053 [2024-07-15 13:26:11.744806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.053 [2024-07-15 13:26:11.749647] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.053 [2024-07-15 13:26:11.749726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.053 [2024-07-15 13:26:11.749751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.053 [2024-07-15 13:26:11.754912] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.053 [2024-07-15 13:26:11.755020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.053 [2024-07-15 13:26:11.755044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.053 [2024-07-15 13:26:11.759969] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.053 [2024-07-15 13:26:11.760055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.053 [2024-07-15 13:26:11.760081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.053 [2024-07-15 13:26:11.764574] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.053 [2024-07-15 13:26:11.764654] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.053 [2024-07-15 13:26:11.764678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.053 [2024-07-15 13:26:11.768194] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.053 [2024-07-15 13:26:11.768284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.053 [2024-07-15 13:26:11.768309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.053 [2024-07-15 13:26:11.772766] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.053 [2024-07-15 13:26:11.772846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.053 [2024-07-15 13:26:11.772871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.053 [2024-07-15 13:26:11.777642] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.053 [2024-07-15 13:26:11.777727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.053 [2024-07-15 13:26:11.777752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.053 [2024-07-15 13:26:11.782224] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.053 [2024-07-15 13:26:11.782297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.053 [2024-07-15 13:26:11.782321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.053 [2024-07-15 13:26:11.787475] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.053 [2024-07-15 13:26:11.787539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.053 [2024-07-15 13:26:11.787563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.312 [2024-07-15 13:26:11.791192] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.312 [2024-07-15 13:26:11.791262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.312 [2024-07-15 13:26:11.791286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.312 [2024-07-15 13:26:11.795646] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.312 
[2024-07-15 13:26:11.795707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.312 [2024-07-15 13:26:11.795730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.312 [2024-07-15 13:26:11.800866] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.312 [2024-07-15 13:26:11.800928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.312 [2024-07-15 13:26:11.800951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.312 [2024-07-15 13:26:11.805400] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.312 [2024-07-15 13:26:11.805463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.312 [2024-07-15 13:26:11.805488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.312 [2024-07-15 13:26:11.809783] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.312 [2024-07-15 13:26:11.809864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.312 [2024-07-15 13:26:11.809889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.312 [2024-07-15 13:26:11.813157] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.312 [2024-07-15 13:26:11.813239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.312 [2024-07-15 13:26:11.813263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.312 [2024-07-15 13:26:11.818076] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.312 [2024-07-15 13:26:11.818162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.312 [2024-07-15 13:26:11.818188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.312 [2024-07-15 13:26:11.822334] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.312 [2024-07-15 13:26:11.822419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.312 [2024-07-15 13:26:11.822444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.312 [2024-07-15 13:26:11.827736] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xe3b080) 00:27:15.312 [2024-07-15 13:26:11.827826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.312 [2024-07-15 13:26:11.827850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.312 [2024-07-15 13:26:11.832776] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.312 [2024-07-15 13:26:11.832858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.312 [2024-07-15 13:26:11.832884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.312 [2024-07-15 13:26:11.837262] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.312 [2024-07-15 13:26:11.837338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.312 [2024-07-15 13:26:11.837361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.312 [2024-07-15 13:26:11.840478] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.312 [2024-07-15 13:26:11.840545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.312 [2024-07-15 13:26:11.840570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.312 [2024-07-15 13:26:11.845884] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.312 [2024-07-15 13:26:11.845979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.312 [2024-07-15 13:26:11.846005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.312 [2024-07-15 13:26:11.850943] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.312 [2024-07-15 13:26:11.851026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.312 [2024-07-15 13:26:11.851051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.312 [2024-07-15 13:26:11.855726] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.312 [2024-07-15 13:26:11.855790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.312 [2024-07-15 13:26:11.855815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.312 [2024-07-15 13:26:11.861175] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.312 [2024-07-15 13:26:11.861258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.312 [2024-07-15 13:26:11.861284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.312 [2024-07-15 13:26:11.864461] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.312 [2024-07-15 13:26:11.864514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.312 [2024-07-15 13:26:11.864537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.312 [2024-07-15 13:26:11.869413] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.312 [2024-07-15 13:26:11.869485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.312 [2024-07-15 13:26:11.869509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.312 [2024-07-15 13:26:11.874288] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.312 [2024-07-15 13:26:11.874355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.312 [2024-07-15 13:26:11.874380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.312 [2024-07-15 13:26:11.879740] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.312 [2024-07-15 13:26:11.879832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.312 [2024-07-15 13:26:11.879859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.312 [2024-07-15 13:26:11.884996] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.312 [2024-07-15 13:26:11.885086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.312 [2024-07-15 13:26:11.885113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.312 [2024-07-15 13:26:11.888175] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.312 [2024-07-15 13:26:11.888265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.312 [2024-07-15 13:26:11.888293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:27:15.312 [2024-07-15 13:26:11.893953] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.312 [2024-07-15 13:26:11.894035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.312 [2024-07-15 13:26:11.894060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.312 [2024-07-15 13:26:11.898299] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.312 [2024-07-15 13:26:11.898376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.312 [2024-07-15 13:26:11.898401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.312 [2024-07-15 13:26:11.902550] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.312 [2024-07-15 13:26:11.902631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.312 [2024-07-15 13:26:11.902655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.312 [2024-07-15 13:26:11.907635] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.312 [2024-07-15 13:26:11.907724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.312 [2024-07-15 13:26:11.907749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.313 [2024-07-15 13:26:11.912524] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.313 [2024-07-15 13:26:11.912620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.313 [2024-07-15 13:26:11.912657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.313 [2024-07-15 13:26:11.916985] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.313 [2024-07-15 13:26:11.917056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.313 [2024-07-15 13:26:11.917083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.313 [2024-07-15 13:26:11.921759] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.313 [2024-07-15 13:26:11.921837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.313 [2024-07-15 13:26:11.921866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.313 [2024-07-15 13:26:11.926914] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.313 [2024-07-15 13:26:11.926984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.313 [2024-07-15 13:26:11.927010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.313 [2024-07-15 13:26:11.931986] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.313 [2024-07-15 13:26:11.932050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.313 [2024-07-15 13:26:11.932075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.313 [2024-07-15 13:26:11.937022] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.313 [2024-07-15 13:26:11.937087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.313 [2024-07-15 13:26:11.937111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.313 [2024-07-15 13:26:11.940191] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.313 [2024-07-15 13:26:11.940256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.313 [2024-07-15 13:26:11.940279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.313 [2024-07-15 13:26:11.945271] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.313 [2024-07-15 13:26:11.945342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.313 [2024-07-15 13:26:11.945367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.313 [2024-07-15 13:26:11.949840] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.313 [2024-07-15 13:26:11.949906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.313 [2024-07-15 13:26:11.949931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.313 [2024-07-15 13:26:11.953753] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.313 [2024-07-15 13:26:11.953819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.313 [2024-07-15 13:26:11.953843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.313 [2024-07-15 13:26:11.957575] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.313 [2024-07-15 13:26:11.957643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.313 [2024-07-15 13:26:11.957666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.313 [2024-07-15 13:26:11.962445] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.313 [2024-07-15 13:26:11.962521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.313 [2024-07-15 13:26:11.962547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.313 [2024-07-15 13:26:11.967355] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.313 [2024-07-15 13:26:11.967473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.313 [2024-07-15 13:26:11.967507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.313 [2024-07-15 13:26:11.972146] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.313 [2024-07-15 13:26:11.972276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.313 [2024-07-15 13:26:11.972311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.313 [2024-07-15 13:26:11.977433] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.313 [2024-07-15 13:26:11.977531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.313 [2024-07-15 13:26:11.977556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.313 [2024-07-15 13:26:11.980654] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.313 [2024-07-15 13:26:11.980730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.313 [2024-07-15 13:26:11.980754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.313 [2024-07-15 13:26:11.986089] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.313 [2024-07-15 13:26:11.986176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.313 [2024-07-15 13:26:11.986219] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.313 [2024-07-15 13:26:11.991618] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.313 [2024-07-15 13:26:11.991708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.313 [2024-07-15 13:26:11.991733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.313 [2024-07-15 13:26:11.996847] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.313 [2024-07-15 13:26:11.996928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.313 [2024-07-15 13:26:11.996953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.313 [2024-07-15 13:26:11.999981] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.313 [2024-07-15 13:26:12.000039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.313 [2024-07-15 13:26:12.000062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.313 [2024-07-15 13:26:12.004275] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.313 [2024-07-15 13:26:12.004350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.313 [2024-07-15 13:26:12.004373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.313 [2024-07-15 13:26:12.008837] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.313 [2024-07-15 13:26:12.008913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.313 [2024-07-15 13:26:12.008938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.313 [2024-07-15 13:26:12.013901] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.313 [2024-07-15 13:26:12.013973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.313 [2024-07-15 13:26:12.013998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.313 [2024-07-15 13:26:12.017515] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.313 [2024-07-15 13:26:12.017582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.313 
[2024-07-15 13:26:12.017607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.313 [2024-07-15 13:26:12.022993] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.313 [2024-07-15 13:26:12.023078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.313 [2024-07-15 13:26:12.023104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.313 [2024-07-15 13:26:12.028890] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.313 [2024-07-15 13:26:12.028973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.313 [2024-07-15 13:26:12.029000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.313 [2024-07-15 13:26:12.034181] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.313 [2024-07-15 13:26:12.034300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.313 [2024-07-15 13:26:12.034328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.313 [2024-07-15 13:26:12.038503] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.313 [2024-07-15 13:26:12.038601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.313 [2024-07-15 13:26:12.038626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.313 [2024-07-15 13:26:12.043933] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.313 [2024-07-15 13:26:12.044022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.314 [2024-07-15 13:26:12.044047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.572 [2024-07-15 13:26:12.049907] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.572 [2024-07-15 13:26:12.050005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.572 [2024-07-15 13:26:12.050031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.572 [2024-07-15 13:26:12.056011] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.572 [2024-07-15 13:26:12.056109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17376 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:27:15.572 [2024-07-15 13:26:12.056135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.572 [2024-07-15 13:26:12.061350] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.572 [2024-07-15 13:26:12.061441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.572 [2024-07-15 13:26:12.061466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.572 [2024-07-15 13:26:12.064468] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.572 [2024-07-15 13:26:12.064527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.572 [2024-07-15 13:26:12.064552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.572 [2024-07-15 13:26:12.069338] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.572 [2024-07-15 13:26:12.069412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.572 [2024-07-15 13:26:12.069435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.572 [2024-07-15 13:26:12.074131] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.572 [2024-07-15 13:26:12.074221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.572 [2024-07-15 13:26:12.074248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.572 [2024-07-15 13:26:12.078081] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.572 [2024-07-15 13:26:12.078147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.572 [2024-07-15 13:26:12.078173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.572 [2024-07-15 13:26:12.083076] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.572 [2024-07-15 13:26:12.083164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.572 [2024-07-15 13:26:12.083188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.572 [2024-07-15 13:26:12.087790] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.572 [2024-07-15 13:26:12.087877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 
nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.572 [2024-07-15 13:26:12.087901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.572 [2024-07-15 13:26:12.091904] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.572 [2024-07-15 13:26:12.091999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.572 [2024-07-15 13:26:12.092027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.572 [2024-07-15 13:26:12.097775] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.572 [2024-07-15 13:26:12.097883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.572 [2024-07-15 13:26:12.097910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.572 [2024-07-15 13:26:12.102915] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.572 [2024-07-15 13:26:12.102998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.573 [2024-07-15 13:26:12.103024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.573 [2024-07-15 13:26:12.106188] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.573 [2024-07-15 13:26:12.106273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.573 [2024-07-15 13:26:12.106299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.573 [2024-07-15 13:26:12.110813] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.573 [2024-07-15 13:26:12.110894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.573 [2024-07-15 13:26:12.110919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.573 [2024-07-15 13:26:12.116542] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.573 [2024-07-15 13:26:12.116629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.573 [2024-07-15 13:26:12.116654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.573 [2024-07-15 13:26:12.120353] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.573 [2024-07-15 13:26:12.120434] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.573 [2024-07-15 13:26:12.120463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.573 [2024-07-15 13:26:12.124329] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.573 [2024-07-15 13:26:12.124417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.573 [2024-07-15 13:26:12.124442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.573 [2024-07-15 13:26:12.128963] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.573 [2024-07-15 13:26:12.129035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.573 [2024-07-15 13:26:12.129060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.573 [2024-07-15 13:26:12.133913] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.573 [2024-07-15 13:26:12.133984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.573 [2024-07-15 13:26:12.134007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.573 [2024-07-15 13:26:12.138978] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.573 [2024-07-15 13:26:12.139046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.573 [2024-07-15 13:26:12.139071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.573 [2024-07-15 13:26:12.142573] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.573 [2024-07-15 13:26:12.142636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.573 [2024-07-15 13:26:12.142661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.573 [2024-07-15 13:26:12.147816] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.573 [2024-07-15 13:26:12.147907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.573 [2024-07-15 13:26:12.147931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.573 [2024-07-15 13:26:12.153665] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.573 
[2024-07-15 13:26:12.153758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.573 [2024-07-15 13:26:12.153783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.573 [2024-07-15 13:26:12.157367] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.573 [2024-07-15 13:26:12.157454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.573 [2024-07-15 13:26:12.157481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.573 [2024-07-15 13:26:12.162177] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.573 [2024-07-15 13:26:12.162280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.573 [2024-07-15 13:26:12.162304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.573 [2024-07-15 13:26:12.167757] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.573 [2024-07-15 13:26:12.167850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.573 [2024-07-15 13:26:12.167874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.573 [2024-07-15 13:26:12.173240] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.573 [2024-07-15 13:26:12.173330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.573 [2024-07-15 13:26:12.173355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.573 [2024-07-15 13:26:12.177099] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.573 [2024-07-15 13:26:12.177180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.573 [2024-07-15 13:26:12.177219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.573 [2024-07-15 13:26:12.181709] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.573 [2024-07-15 13:26:12.181793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.573 [2024-07-15 13:26:12.181823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.573 [2024-07-15 13:26:12.186375] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0xe3b080) 00:27:15.573 [2024-07-15 13:26:12.186460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.573 [2024-07-15 13:26:12.186483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.573 [2024-07-15 13:26:12.191382] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.573 [2024-07-15 13:26:12.191460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.573 [2024-07-15 13:26:12.191484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.573 [2024-07-15 13:26:12.196592] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.573 [2024-07-15 13:26:12.196673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.573 [2024-07-15 13:26:12.196698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.573 [2024-07-15 13:26:12.199711] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.573 [2024-07-15 13:26:12.199763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.573 [2024-07-15 13:26:12.199789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.573 [2024-07-15 13:26:12.204525] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.573 [2024-07-15 13:26:12.204607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.573 [2024-07-15 13:26:12.204633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.573 [2024-07-15 13:26:12.209870] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.573 [2024-07-15 13:26:12.209948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.573 [2024-07-15 13:26:12.209973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.573 [2024-07-15 13:26:12.214923] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.573 [2024-07-15 13:26:12.215018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.573 [2024-07-15 13:26:12.215043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.573 [2024-07-15 13:26:12.219812] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.573 [2024-07-15 13:26:12.219897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.573 [2024-07-15 13:26:12.219923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.573 [2024-07-15 13:26:12.223789] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.573 [2024-07-15 13:26:12.223865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.573 [2024-07-15 13:26:12.223891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.573 [2024-07-15 13:26:12.229080] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.574 [2024-07-15 13:26:12.229182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.574 [2024-07-15 13:26:12.229224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.574 [2024-07-15 13:26:12.234682] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.574 [2024-07-15 13:26:12.234785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.574 [2024-07-15 13:26:12.234812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.574 [2024-07-15 13:26:12.239433] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.574 [2024-07-15 13:26:12.239532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.574 [2024-07-15 13:26:12.239558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.574 [2024-07-15 13:26:12.244264] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.574 [2024-07-15 13:26:12.244371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.574 [2024-07-15 13:26:12.244398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.574 [2024-07-15 13:26:12.249060] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.574 [2024-07-15 13:26:12.249146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.574 [2024-07-15 13:26:12.249171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:27:15.574 [2024-07-15 13:26:12.253848] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.574 [2024-07-15 13:26:12.253936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.574 [2024-07-15 13:26:12.253962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.574 [2024-07-15 13:26:12.258539] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.574 [2024-07-15 13:26:12.258615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.574 [2024-07-15 13:26:12.258640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.574 [2024-07-15 13:26:12.262819] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.574 [2024-07-15 13:26:12.262883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.574 [2024-07-15 13:26:12.262907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.574 [2024-07-15 13:26:12.266256] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.574 [2024-07-15 13:26:12.266311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.574 [2024-07-15 13:26:12.266336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.574 [2024-07-15 13:26:12.270697] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.574 [2024-07-15 13:26:12.270773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.574 [2024-07-15 13:26:12.270799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.574 [2024-07-15 13:26:12.275252] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.574 [2024-07-15 13:26:12.275327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.574 [2024-07-15 13:26:12.275352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.574 [2024-07-15 13:26:12.280142] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.574 [2024-07-15 13:26:12.280236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.574 [2024-07-15 13:26:12.280261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.574 [2024-07-15 13:26:12.284118] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.574 [2024-07-15 13:26:12.284196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.574 [2024-07-15 13:26:12.284241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.574 [2024-07-15 13:26:12.288819] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.574 [2024-07-15 13:26:12.288893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.574 [2024-07-15 13:26:12.288917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.574 [2024-07-15 13:26:12.294308] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.574 [2024-07-15 13:26:12.294381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.574 [2024-07-15 13:26:12.294405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.574 [2024-07-15 13:26:12.297684] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.574 [2024-07-15 13:26:12.297750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.574 [2024-07-15 13:26:12.297774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.574 [2024-07-15 13:26:12.301987] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.574 [2024-07-15 13:26:12.302063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.574 [2024-07-15 13:26:12.302085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.574 [2024-07-15 13:26:12.306916] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.574 [2024-07-15 13:26:12.306991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.574 [2024-07-15 13:26:12.307015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.834 [2024-07-15 13:26:12.311865] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.834 [2024-07-15 13:26:12.311947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.834 [2024-07-15 13:26:12.311972] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.834 [2024-07-15 13:26:12.316136] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.834 [2024-07-15 13:26:12.316255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.834 [2024-07-15 13:26:12.316304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.834 [2024-07-15 13:26:12.319728] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.834 [2024-07-15 13:26:12.319796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.834 [2024-07-15 13:26:12.319824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.834 [2024-07-15 13:26:12.324631] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.834 [2024-07-15 13:26:12.324718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.834 [2024-07-15 13:26:12.324742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.834 [2024-07-15 13:26:12.329659] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.834 [2024-07-15 13:26:12.329754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.834 [2024-07-15 13:26:12.329780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.834 [2024-07-15 13:26:12.333852] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.834 [2024-07-15 13:26:12.333930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.834 [2024-07-15 13:26:12.333954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.834 [2024-07-15 13:26:12.339239] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.834 [2024-07-15 13:26:12.339335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.834 [2024-07-15 13:26:12.339361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.834 [2024-07-15 13:26:12.344540] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.834 [2024-07-15 13:26:12.344638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.834 [2024-07-15 13:26:12.344661] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.834 [2024-07-15 13:26:12.349744] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.834 [2024-07-15 13:26:12.349822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.834 [2024-07-15 13:26:12.349846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.834 [2024-07-15 13:26:12.353623] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.834 [2024-07-15 13:26:12.353680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.834 [2024-07-15 13:26:12.353703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.834 [2024-07-15 13:26:12.357772] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.834 [2024-07-15 13:26:12.357845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.834 [2024-07-15 13:26:12.357871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.834 [2024-07-15 13:26:12.363064] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.834 [2024-07-15 13:26:12.363134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.834 [2024-07-15 13:26:12.363159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.834 [2024-07-15 13:26:12.366495] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.834 [2024-07-15 13:26:12.366556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.834 [2024-07-15 13:26:12.366579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.834 [2024-07-15 13:26:12.371775] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.834 [2024-07-15 13:26:12.371856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.834 [2024-07-15 13:26:12.371880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.834 [2024-07-15 13:26:12.376564] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.834 [2024-07-15 13:26:12.376636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:15.834 [2024-07-15 13:26:12.376660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.834 [2024-07-15 13:26:12.380443] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.834 [2024-07-15 13:26:12.380515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.834 [2024-07-15 13:26:12.380546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.834 [2024-07-15 13:26:12.386551] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.834 [2024-07-15 13:26:12.386636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.834 [2024-07-15 13:26:12.386662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.834 [2024-07-15 13:26:12.390725] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.834 [2024-07-15 13:26:12.390801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.834 [2024-07-15 13:26:12.390825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.834 [2024-07-15 13:26:12.395506] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.834 [2024-07-15 13:26:12.395587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.834 [2024-07-15 13:26:12.395615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.834 [2024-07-15 13:26:12.399947] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.834 [2024-07-15 13:26:12.400038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.834 [2024-07-15 13:26:12.400064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.834 [2024-07-15 13:26:12.405318] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.834 [2024-07-15 13:26:12.405409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.834 [2024-07-15 13:26:12.405435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.834 [2024-07-15 13:26:12.411364] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.834 [2024-07-15 13:26:12.411450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10336 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.834 [2024-07-15 13:26:12.411476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.834 [2024-07-15 13:26:12.415350] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.834 [2024-07-15 13:26:12.415422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.834 [2024-07-15 13:26:12.415446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.834 [2024-07-15 13:26:12.419882] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.834 [2024-07-15 13:26:12.419953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.834 [2024-07-15 13:26:12.419978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.835 [2024-07-15 13:26:12.425483] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.835 [2024-07-15 13:26:12.425568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.835 [2024-07-15 13:26:12.425593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.835 [2024-07-15 13:26:12.430173] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.835 [2024-07-15 13:26:12.430256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.835 [2024-07-15 13:26:12.430283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.835 [2024-07-15 13:26:12.434469] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.835 [2024-07-15 13:26:12.434556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.835 [2024-07-15 13:26:12.434579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.835 [2024-07-15 13:26:12.439382] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.835 [2024-07-15 13:26:12.439463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.835 [2024-07-15 13:26:12.439489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.835 [2024-07-15 13:26:12.443698] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.835 [2024-07-15 13:26:12.443768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:5 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.835 [2024-07-15 13:26:12.443791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.835 [2024-07-15 13:26:12.447892] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.835 [2024-07-15 13:26:12.447974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.835 [2024-07-15 13:26:12.448000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.835 [2024-07-15 13:26:12.452316] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.835 [2024-07-15 13:26:12.452390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.835 [2024-07-15 13:26:12.452414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.835 [2024-07-15 13:26:12.456765] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.835 [2024-07-15 13:26:12.456842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.835 [2024-07-15 13:26:12.456867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.835 [2024-07-15 13:26:12.461360] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.835 [2024-07-15 13:26:12.461430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.835 [2024-07-15 13:26:12.461454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.835 [2024-07-15 13:26:12.465838] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.835 [2024-07-15 13:26:12.465915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.835 [2024-07-15 13:26:12.465939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.835 [2024-07-15 13:26:12.470313] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.835 [2024-07-15 13:26:12.470392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.835 [2024-07-15 13:26:12.470416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.835 [2024-07-15 13:26:12.475018] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.835 [2024-07-15 13:26:12.475092] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.835 [2024-07-15 13:26:12.475117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.835 [2024-07-15 13:26:12.479837] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.835 [2024-07-15 13:26:12.479909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.835 [2024-07-15 13:26:12.479945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.835 [2024-07-15 13:26:12.483609] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.835 [2024-07-15 13:26:12.483675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.835 [2024-07-15 13:26:12.483700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.835 [2024-07-15 13:26:12.488556] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.835 [2024-07-15 13:26:12.488645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.835 [2024-07-15 13:26:12.488669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.835 [2024-07-15 13:26:12.492997] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.835 [2024-07-15 13:26:12.493070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.835 [2024-07-15 13:26:12.493094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.835 [2024-07-15 13:26:12.496707] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.835 [2024-07-15 13:26:12.496768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.835 [2024-07-15 13:26:12.496791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.835 [2024-07-15 13:26:12.501480] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.835 [2024-07-15 13:26:12.501557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.835 [2024-07-15 13:26:12.501583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.835 [2024-07-15 13:26:12.505398] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.835 
[2024-07-15 13:26:12.505464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.835 [2024-07-15 13:26:12.505486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.835 [2024-07-15 13:26:12.510409] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.835 [2024-07-15 13:26:12.510483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.835 [2024-07-15 13:26:12.510507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.835 [2024-07-15 13:26:12.515698] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.835 [2024-07-15 13:26:12.515780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.835 [2024-07-15 13:26:12.515805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.835 [2024-07-15 13:26:12.520375] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.835 [2024-07-15 13:26:12.520447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.835 [2024-07-15 13:26:12.520471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.835 [2024-07-15 13:26:12.525712] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.835 [2024-07-15 13:26:12.525798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.835 [2024-07-15 13:26:12.525821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.835 [2024-07-15 13:26:12.529425] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.835 [2024-07-15 13:26:12.529492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.835 [2024-07-15 13:26:12.529517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.835 [2024-07-15 13:26:12.534872] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.835 [2024-07-15 13:26:12.534959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.835 [2024-07-15 13:26:12.534987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.835 [2024-07-15 13:26:12.540365] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0xe3b080) 00:27:15.835 [2024-07-15 13:26:12.540451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.835 [2024-07-15 13:26:12.540480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.835 [2024-07-15 13:26:12.545232] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.835 [2024-07-15 13:26:12.545306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.835 [2024-07-15 13:26:12.545330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.835 [2024-07-15 13:26:12.548249] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.835 [2024-07-15 13:26:12.548314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.835 [2024-07-15 13:26:12.548339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.835 [2024-07-15 13:26:12.553347] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.835 [2024-07-15 13:26:12.553429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.835 [2024-07-15 13:26:12.553455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.836 [2024-07-15 13:26:12.557028] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.836 [2024-07-15 13:26:12.557097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.836 [2024-07-15 13:26:12.557122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.836 [2024-07-15 13:26:12.561433] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.836 [2024-07-15 13:26:12.561500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.836 [2024-07-15 13:26:12.561525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.836 [2024-07-15 13:26:12.566166] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.836 [2024-07-15 13:26:12.566256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.836 [2024-07-15 13:26:12.566282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.836 [2024-07-15 13:26:12.571193] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:15.836 [2024-07-15 13:26:12.571280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.836 [2024-07-15 13:26:12.571304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.095 [2024-07-15 13:26:12.575518] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.095 [2024-07-15 13:26:12.575589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.095 [2024-07-15 13:26:12.575613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.095 [2024-07-15 13:26:12.580189] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.095 [2024-07-15 13:26:12.580260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.095 [2024-07-15 13:26:12.580285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.095 [2024-07-15 13:26:12.584318] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.095 [2024-07-15 13:26:12.584391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.095 [2024-07-15 13:26:12.584415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.095 [2024-07-15 13:26:12.588964] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.095 [2024-07-15 13:26:12.589039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.095 [2024-07-15 13:26:12.589063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.095 [2024-07-15 13:26:12.593697] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.095 [2024-07-15 13:26:12.593763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.095 [2024-07-15 13:26:12.593787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.095 [2024-07-15 13:26:12.597556] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.095 [2024-07-15 13:26:12.597624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.095 [2024-07-15 13:26:12.597649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:27:16.095 [2024-07-15 13:26:12.602811] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.095 [2024-07-15 13:26:12.602882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.095 [2024-07-15 13:26:12.602905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.095 [2024-07-15 13:26:12.608101] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.095 [2024-07-15 13:26:12.608165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.095 [2024-07-15 13:26:12.608190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.095 [2024-07-15 13:26:12.611201] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.095 [2024-07-15 13:26:12.611285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.095 [2024-07-15 13:26:12.611310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.095 [2024-07-15 13:26:12.615830] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.095 [2024-07-15 13:26:12.615896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.095 [2024-07-15 13:26:12.615921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.095 [2024-07-15 13:26:12.621699] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.095 [2024-07-15 13:26:12.621784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.095 [2024-07-15 13:26:12.621807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.095 [2024-07-15 13:26:12.627278] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.095 [2024-07-15 13:26:12.627351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.095 [2024-07-15 13:26:12.627377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.095 [2024-07-15 13:26:12.630946] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.095 [2024-07-15 13:26:12.631010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.095 [2024-07-15 13:26:12.631035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.095 [2024-07-15 13:26:12.635758] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.095 [2024-07-15 13:26:12.635833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.096 [2024-07-15 13:26:12.635858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.096 [2024-07-15 13:26:12.639594] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.096 [2024-07-15 13:26:12.639671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.096 [2024-07-15 13:26:12.639695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.096 [2024-07-15 13:26:12.644624] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.096 [2024-07-15 13:26:12.644705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.096 [2024-07-15 13:26:12.644730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.096 [2024-07-15 13:26:12.649839] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.096 [2024-07-15 13:26:12.649920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.096 [2024-07-15 13:26:12.649955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.096 [2024-07-15 13:26:12.654034] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.096 [2024-07-15 13:26:12.654109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.096 [2024-07-15 13:26:12.654132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.096 [2024-07-15 13:26:12.659061] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.096 [2024-07-15 13:26:12.659151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.096 [2024-07-15 13:26:12.659176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.096 [2024-07-15 13:26:12.664488] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.096 [2024-07-15 13:26:12.664567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.096 [2024-07-15 13:26:12.664592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.096 [2024-07-15 13:26:12.668919] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.096 [2024-07-15 13:26:12.668997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.096 [2024-07-15 13:26:12.669021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.096 [2024-07-15 13:26:12.674066] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.096 [2024-07-15 13:26:12.674158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.096 [2024-07-15 13:26:12.674182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.096 [2024-07-15 13:26:12.679573] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.096 [2024-07-15 13:26:12.679651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.096 [2024-07-15 13:26:12.679675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.096 [2024-07-15 13:26:12.683925] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.096 [2024-07-15 13:26:12.684001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.096 [2024-07-15 13:26:12.684027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.096 [2024-07-15 13:26:12.687763] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.096 [2024-07-15 13:26:12.687837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.096 [2024-07-15 13:26:12.687860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.096 [2024-07-15 13:26:12.692454] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.096 [2024-07-15 13:26:12.692535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.096 [2024-07-15 13:26:12.692563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.096 [2024-07-15 13:26:12.697137] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.096 [2024-07-15 13:26:12.697218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.096 [2024-07-15 13:26:12.697244] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.096 [2024-07-15 13:26:12.701925] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.096 [2024-07-15 13:26:12.702003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.096 [2024-07-15 13:26:12.702029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.096 [2024-07-15 13:26:12.706265] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.096 [2024-07-15 13:26:12.706343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.096 [2024-07-15 13:26:12.706367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.096 [2024-07-15 13:26:12.710764] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.096 [2024-07-15 13:26:12.710850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.096 [2024-07-15 13:26:12.710876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.096 [2024-07-15 13:26:12.715807] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.096 [2024-07-15 13:26:12.715892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.096 [2024-07-15 13:26:12.715916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.096 [2024-07-15 13:26:12.720171] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.096 [2024-07-15 13:26:12.720266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.096 [2024-07-15 13:26:12.720292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.096 [2024-07-15 13:26:12.724986] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.096 [2024-07-15 13:26:12.725071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.096 [2024-07-15 13:26:12.725097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.096 [2024-07-15 13:26:12.728883] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.096 [2024-07-15 13:26:12.728957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.096 
[2024-07-15 13:26:12.728981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.096 [2024-07-15 13:26:12.733954] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.096 [2024-07-15 13:26:12.734030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.096 [2024-07-15 13:26:12.734057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.096 [2024-07-15 13:26:12.739496] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.096 [2024-07-15 13:26:12.739571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.096 [2024-07-15 13:26:12.739598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.096 [2024-07-15 13:26:12.743552] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.096 [2024-07-15 13:26:12.743621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.096 [2024-07-15 13:26:12.743645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.096 [2024-07-15 13:26:12.748364] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.096 [2024-07-15 13:26:12.748452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.096 [2024-07-15 13:26:12.748477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.096 [2024-07-15 13:26:12.753077] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.096 [2024-07-15 13:26:12.753158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.096 [2024-07-15 13:26:12.753182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.096 [2024-07-15 13:26:12.757430] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.096 [2024-07-15 13:26:12.757505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.096 [2024-07-15 13:26:12.757529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.096 [2024-07-15 13:26:12.761692] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.096 [2024-07-15 13:26:12.761760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17824 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:27:16.096 [2024-07-15 13:26:12.761784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.096 [2024-07-15 13:26:12.766316] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.096 [2024-07-15 13:26:12.766403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.096 [2024-07-15 13:26:12.766430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.096 [2024-07-15 13:26:12.770122] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.096 [2024-07-15 13:26:12.770184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.097 [2024-07-15 13:26:12.770221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.097 [2024-07-15 13:26:12.774770] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.097 [2024-07-15 13:26:12.774854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.097 [2024-07-15 13:26:12.774879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.097 [2024-07-15 13:26:12.779519] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.097 [2024-07-15 13:26:12.779589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.097 [2024-07-15 13:26:12.779613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.097 [2024-07-15 13:26:12.784248] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.097 [2024-07-15 13:26:12.784339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.097 [2024-07-15 13:26:12.784366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.097 [2024-07-15 13:26:12.788059] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.097 [2024-07-15 13:26:12.788144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.097 [2024-07-15 13:26:12.788167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.097 [2024-07-15 13:26:12.792592] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.097 [2024-07-15 13:26:12.792663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 
nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.097 [2024-07-15 13:26:12.792689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.097 [2024-07-15 13:26:12.797275] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.097 [2024-07-15 13:26:12.797365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.097 [2024-07-15 13:26:12.797391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.097 [2024-07-15 13:26:12.801149] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.097 [2024-07-15 13:26:12.801249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.097 [2024-07-15 13:26:12.801274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.097 [2024-07-15 13:26:12.806005] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.097 [2024-07-15 13:26:12.806083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.097 [2024-07-15 13:26:12.806109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.097 [2024-07-15 13:26:12.810891] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.097 [2024-07-15 13:26:12.810974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.097 [2024-07-15 13:26:12.810999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.097 [2024-07-15 13:26:12.815772] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.097 [2024-07-15 13:26:12.815849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.097 [2024-07-15 13:26:12.815875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.097 [2024-07-15 13:26:12.818834] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.097 [2024-07-15 13:26:12.818910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.097 [2024-07-15 13:26:12.818938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.097 [2024-07-15 13:26:12.823764] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.097 [2024-07-15 13:26:12.823841] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.097 [2024-07-15 13:26:12.823864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.097 [2024-07-15 13:26:12.828487] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.097 [2024-07-15 13:26:12.828583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.097 [2024-07-15 13:26:12.828610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.097 [2024-07-15 13:26:12.831909] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.097 [2024-07-15 13:26:12.831979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.097 [2024-07-15 13:26:12.832004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.357 [2024-07-15 13:26:12.837398] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.357 [2024-07-15 13:26:12.837476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.357 [2024-07-15 13:26:12.837502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.357 [2024-07-15 13:26:12.842450] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.357 [2024-07-15 13:26:12.842537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.357 [2024-07-15 13:26:12.842562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.357 [2024-07-15 13:26:12.847067] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.357 [2024-07-15 13:26:12.847162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.357 [2024-07-15 13:26:12.847187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.357 [2024-07-15 13:26:12.850807] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.357 [2024-07-15 13:26:12.850895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.357 [2024-07-15 13:26:12.850921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.357 [2024-07-15 13:26:12.855049] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.357 
[2024-07-15 13:26:12.855137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.357 [2024-07-15 13:26:12.855160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.357 [2024-07-15 13:26:12.860359] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.357 [2024-07-15 13:26:12.860465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.357 [2024-07-15 13:26:12.860490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.357 [2024-07-15 13:26:12.864267] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.357 [2024-07-15 13:26:12.864357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.357 [2024-07-15 13:26:12.864381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.357 [2024-07-15 13:26:12.869070] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.357 [2024-07-15 13:26:12.869146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.357 [2024-07-15 13:26:12.869172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.357 [2024-07-15 13:26:12.873413] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.357 [2024-07-15 13:26:12.873492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.357 [2024-07-15 13:26:12.873515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.357 [2024-07-15 13:26:12.877974] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.357 [2024-07-15 13:26:12.878051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.357 [2024-07-15 13:26:12.878076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.357 [2024-07-15 13:26:12.882104] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.357 [2024-07-15 13:26:12.882184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.357 [2024-07-15 13:26:12.882223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.357 [2024-07-15 13:26:12.886724] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0xe3b080) 00:27:16.357 [2024-07-15 13:26:12.886813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.357 [2024-07-15 13:26:12.886837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.357 [2024-07-15 13:26:12.890645] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.357 [2024-07-15 13:26:12.890712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.357 [2024-07-15 13:26:12.890737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.357 [2024-07-15 13:26:12.895195] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.358 [2024-07-15 13:26:12.895307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.358 [2024-07-15 13:26:12.895333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.358 [2024-07-15 13:26:12.900050] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.358 [2024-07-15 13:26:12.900127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.358 [2024-07-15 13:26:12.900153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.358 [2024-07-15 13:26:12.904430] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.358 [2024-07-15 13:26:12.904500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.358 [2024-07-15 13:26:12.904524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.358 [2024-07-15 13:26:12.908363] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.358 [2024-07-15 13:26:12.908426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.358 [2024-07-15 13:26:12.908450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.358 [2024-07-15 13:26:12.913398] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.358 [2024-07-15 13:26:12.913479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.358 [2024-07-15 13:26:12.913504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.358 [2024-07-15 13:26:12.918685] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.358 [2024-07-15 13:26:12.918765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.358 [2024-07-15 13:26:12.918791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.358 [2024-07-15 13:26:12.923154] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.358 [2024-07-15 13:26:12.923237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.358 [2024-07-15 13:26:12.923263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.358 [2024-07-15 13:26:12.927070] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.358 [2024-07-15 13:26:12.927158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.358 [2024-07-15 13:26:12.927184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.358 [2024-07-15 13:26:12.931771] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.358 [2024-07-15 13:26:12.931839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.358 [2024-07-15 13:26:12.931864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.358 [2024-07-15 13:26:12.936948] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.358 [2024-07-15 13:26:12.937018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.358 [2024-07-15 13:26:12.937043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.358 [2024-07-15 13:26:12.941392] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.358 [2024-07-15 13:26:12.941455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.358 [2024-07-15 13:26:12.941480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.358 [2024-07-15 13:26:12.944993] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.358 [2024-07-15 13:26:12.945049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.358 [2024-07-15 13:26:12.945072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:27:16.358 [2024-07-15 13:26:12.949608] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.358 [2024-07-15 13:26:12.949673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.358 [2024-07-15 13:26:12.949698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.358 [2024-07-15 13:26:12.953284] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.358 [2024-07-15 13:26:12.953344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.358 [2024-07-15 13:26:12.953367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.358 [2024-07-15 13:26:12.958336] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.358 [2024-07-15 13:26:12.958413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.358 [2024-07-15 13:26:12.958440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.358 [2024-07-15 13:26:12.962683] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.358 [2024-07-15 13:26:12.962762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.358 [2024-07-15 13:26:12.962788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.358 [2024-07-15 13:26:12.967392] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.358 [2024-07-15 13:26:12.967464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.358 [2024-07-15 13:26:12.967488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.358 [2024-07-15 13:26:12.972364] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.358 [2024-07-15 13:26:12.972441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.358 [2024-07-15 13:26:12.972465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.358 [2024-07-15 13:26:12.977713] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.358 [2024-07-15 13:26:12.977785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.358 [2024-07-15 13:26:12.977809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.358 [2024-07-15 13:26:12.980746] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.358 [2024-07-15 13:26:12.980817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.358 [2024-07-15 13:26:12.980843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.358 [2024-07-15 13:26:12.985778] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.358 [2024-07-15 13:26:12.985867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.358 [2024-07-15 13:26:12.985894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.358 [2024-07-15 13:26:12.991372] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.358 [2024-07-15 13:26:12.991448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.358 [2024-07-15 13:26:12.991474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.358 [2024-07-15 13:26:12.996057] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.358 [2024-07-15 13:26:12.996139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.358 [2024-07-15 13:26:12.996162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.358 [2024-07-15 13:26:12.999920] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.358 [2024-07-15 13:26:12.999989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.358 [2024-07-15 13:26:13.000013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.358 [2024-07-15 13:26:13.004520] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.358 [2024-07-15 13:26:13.004601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.358 [2024-07-15 13:26:13.004626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.358 [2024-07-15 13:26:13.008269] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.358 [2024-07-15 13:26:13.008335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.358 [2024-07-15 13:26:13.008360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.358 [2024-07-15 13:26:13.013256] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.358 [2024-07-15 13:26:13.013342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.358 [2024-07-15 13:26:13.013367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.358 [2024-07-15 13:26:13.017974] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.358 [2024-07-15 13:26:13.018054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.358 [2024-07-15 13:26:13.018078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.358 [2024-07-15 13:26:13.023344] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.358 [2024-07-15 13:26:13.023425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.358 [2024-07-15 13:26:13.023450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.359 [2024-07-15 13:26:13.028015] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.359 [2024-07-15 13:26:13.028097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.359 [2024-07-15 13:26:13.028120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.359 [2024-07-15 13:26:13.031001] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.359 [2024-07-15 13:26:13.031061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.359 [2024-07-15 13:26:13.031087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.359 [2024-07-15 13:26:13.035164] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.359 [2024-07-15 13:26:13.035253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.359 [2024-07-15 13:26:13.035278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.359 [2024-07-15 13:26:13.040775] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.359 [2024-07-15 13:26:13.040866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.359 [2024-07-15 13:26:13.040894] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.359 [2024-07-15 13:26:13.045321] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.359 [2024-07-15 13:26:13.045387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.359 [2024-07-15 13:26:13.045412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.359 [2024-07-15 13:26:13.049107] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.359 [2024-07-15 13:26:13.049178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.359 [2024-07-15 13:26:13.049215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.359 [2024-07-15 13:26:13.053698] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.359 [2024-07-15 13:26:13.053766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.359 [2024-07-15 13:26:13.053790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.359 [2024-07-15 13:26:13.058400] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.359 [2024-07-15 13:26:13.058467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.359 [2024-07-15 13:26:13.058490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.359 [2024-07-15 13:26:13.062516] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.359 [2024-07-15 13:26:13.062575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.359 [2024-07-15 13:26:13.062598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.359 [2024-07-15 13:26:13.067258] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.359 [2024-07-15 13:26:13.067318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.359 [2024-07-15 13:26:13.067341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.359 [2024-07-15 13:26:13.071083] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.359 [2024-07-15 13:26:13.071143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.359 
[2024-07-15 13:26:13.071165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.359 [2024-07-15 13:26:13.075601] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.359 [2024-07-15 13:26:13.075664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.359 [2024-07-15 13:26:13.075686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.359 [2024-07-15 13:26:13.080022] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.359 [2024-07-15 13:26:13.080084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.359 [2024-07-15 13:26:13.080108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.359 [2024-07-15 13:26:13.083606] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.359 [2024-07-15 13:26:13.083667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.359 [2024-07-15 13:26:13.083690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.359 [2024-07-15 13:26:13.088441] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.359 [2024-07-15 13:26:13.088507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.359 [2024-07-15 13:26:13.088532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.359 [2024-07-15 13:26:13.092953] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.359 [2024-07-15 13:26:13.093018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.359 [2024-07-15 13:26:13.093042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.618 [2024-07-15 13:26:13.097752] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.618 [2024-07-15 13:26:13.097817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.618 [2024-07-15 13:26:13.097840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.618 [2024-07-15 13:26:13.100720] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.619 [2024-07-15 13:26:13.100770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:16.619 [2024-07-15 13:26:13.100793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.619 [2024-07-15 13:26:13.105718] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.619 [2024-07-15 13:26:13.105794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.619 [2024-07-15 13:26:13.105818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.619 [2024-07-15 13:26:13.110630] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.619 [2024-07-15 13:26:13.110697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.619 [2024-07-15 13:26:13.110719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.619 [2024-07-15 13:26:13.114620] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.619 [2024-07-15 13:26:13.114677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.619 [2024-07-15 13:26:13.114700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.619 [2024-07-15 13:26:13.118706] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.619 [2024-07-15 13:26:13.118769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.619 [2024-07-15 13:26:13.118792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.619 [2024-07-15 13:26:13.123519] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.619 [2024-07-15 13:26:13.123580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.619 [2024-07-15 13:26:13.123604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.619 [2024-07-15 13:26:13.127521] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.619 [2024-07-15 13:26:13.127577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.619 [2024-07-15 13:26:13.127601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.619 [2024-07-15 13:26:13.132164] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.619 [2024-07-15 13:26:13.132242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 
nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.619 [2024-07-15 13:26:13.132267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.619 [2024-07-15 13:26:13.137407] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.619 [2024-07-15 13:26:13.137469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.619 [2024-07-15 13:26:13.137494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.619 [2024-07-15 13:26:13.140561] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.619 [2024-07-15 13:26:13.140615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.619 [2024-07-15 13:26:13.140638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.619 [2024-07-15 13:26:13.144677] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.619 [2024-07-15 13:26:13.144732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.619 [2024-07-15 13:26:13.144754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.619 [2024-07-15 13:26:13.149915] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.619 [2024-07-15 13:26:13.149989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.619 [2024-07-15 13:26:13.150012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.619 [2024-07-15 13:26:13.154200] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.619 [2024-07-15 13:26:13.154268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.619 [2024-07-15 13:26:13.154291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.619 [2024-07-15 13:26:13.158359] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.619 [2024-07-15 13:26:13.158411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.619 [2024-07-15 13:26:13.158434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.619 [2024-07-15 13:26:13.162818] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.619 [2024-07-15 13:26:13.162872] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.619 [2024-07-15 13:26:13.162895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.619 [2024-07-15 13:26:13.166969] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.619 [2024-07-15 13:26:13.167027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.619 [2024-07-15 13:26:13.167051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.619 [2024-07-15 13:26:13.171569] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.619 [2024-07-15 13:26:13.171630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.619 [2024-07-15 13:26:13.171653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.619 [2024-07-15 13:26:13.175637] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.619 [2024-07-15 13:26:13.175697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.619 [2024-07-15 13:26:13.175720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.619 [2024-07-15 13:26:13.179637] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.619 [2024-07-15 13:26:13.179699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.619 [2024-07-15 13:26:13.179723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.619 [2024-07-15 13:26:13.184053] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.619 [2024-07-15 13:26:13.184117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.619 [2024-07-15 13:26:13.184143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.619 [2024-07-15 13:26:13.189161] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.619 [2024-07-15 13:26:13.189241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.619 [2024-07-15 13:26:13.189264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.619 [2024-07-15 13:26:13.193964] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.619 
[2024-07-15 13:26:13.194024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.619 [2024-07-15 13:26:13.194048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.619 [2024-07-15 13:26:13.198942] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.619 [2024-07-15 13:26:13.199000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.619 [2024-07-15 13:26:13.199022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.619 [2024-07-15 13:26:13.202423] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.619 [2024-07-15 13:26:13.202478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.619 [2024-07-15 13:26:13.202501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.619 [2024-07-15 13:26:13.207024] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.619 [2024-07-15 13:26:13.207081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.619 [2024-07-15 13:26:13.207104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.619 [2024-07-15 13:26:13.212107] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.619 [2024-07-15 13:26:13.212180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.619 [2024-07-15 13:26:13.212217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.619 [2024-07-15 13:26:13.216733] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.619 [2024-07-15 13:26:13.216789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.619 [2024-07-15 13:26:13.216812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.619 [2024-07-15 13:26:13.220408] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe3b080) 00:27:16.619 [2024-07-15 13:26:13.220458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.619 [2024-07-15 13:26:13.220481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.619 00:27:16.619 Latency(us) 00:27:16.619 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:27:16.620 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:27:16.620 nvme0n1 : 2.04 6616.23 827.03 0.00 0.00 2372.18 595.78 41466.41 00:27:16.620 =================================================================================================================== 00:27:16.620 Total : 6616.23 827.03 0.00 0.00 2372.18 595.78 41466.41 00:27:16.620 0 00:27:16.620 13:26:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:16.620 13:26:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:16.620 13:26:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:16.620 13:26:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:16.620 | .driver_specific 00:27:16.620 | .nvme_error 00:27:16.620 | .status_code 00:27:16.620 | .command_transient_transport_error' 00:27:16.878 13:26:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 435 > 0 )) 00:27:16.878 13:26:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 112104 00:27:16.878 13:26:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 112104 ']' 00:27:16.878 13:26:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 112104 00:27:16.878 13:26:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:27:16.878 13:26:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:16.878 13:26:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 112104 00:27:16.878 13:26:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:16.878 13:26:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:16.878 killing process with pid 112104 00:27:16.878 13:26:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 112104' 00:27:16.878 13:26:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 112104 00:27:16.878 Received shutdown signal, test time was about 2.000000 seconds 00:27:16.878 00:27:16.878 Latency(us) 00:27:16.878 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:16.878 =================================================================================================================== 00:27:16.878 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:16.878 13:26:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 112104 00:27:17.136 13:26:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:27:17.136 13:26:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:17.136 13:26:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:27:17.136 13:26:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:27:17.136 13:26:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:27:17.136 13:26:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r 
/var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:27:17.136 13:26:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=112189 00:27:17.136 13:26:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 112189 /var/tmp/bperf.sock 00:27:17.136 13:26:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 112189 ']' 00:27:17.136 13:26:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:17.136 13:26:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:17.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:17.136 13:26:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:17.136 13:26:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:17.136 13:26:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:17.136 [2024-07-15 13:26:13.864412] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:27:17.136 [2024-07-15 13:26:13.864511] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112189 ] 00:27:17.394 [2024-07-15 13:26:13.998035] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:17.394 [2024-07-15 13:26:14.116714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:18.328 13:26:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:18.328 13:26:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:27:18.328 13:26:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:18.328 13:26:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:18.586 13:26:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:18.586 13:26:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.586 13:26:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:18.586 13:26:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.586 13:26:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:18.586 13:26:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:18.844 nvme0n1 00:27:18.844 13:26:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:27:18.844 13:26:15 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.844 13:26:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:18.844 13:26:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.844 13:26:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:18.844 13:26:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:19.102 Running I/O for 2 seconds... 00:27:19.102 [2024-07-15 13:26:15.644856] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190f6458 00:27:19.102 [2024-07-15 13:26:15.646056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.102 [2024-07-15 13:26:15.646097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:19.102 [2024-07-15 13:26:15.657262] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190f4f40 00:27:19.102 [2024-07-15 13:26:15.658422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:11802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.102 [2024-07-15 13:26:15.658469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:19.102 [2024-07-15 13:26:15.668891] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190ef6a8 00:27:19.102 [2024-07-15 13:26:15.669886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:21001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.102 [2024-07-15 13:26:15.669925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:19.102 [2024-07-15 13:26:15.680655] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190fd208 00:27:19.102 [2024-07-15 13:26:15.681630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:3080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.102 [2024-07-15 13:26:15.681667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:19.102 [2024-07-15 13:26:15.693201] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190ea680 00:27:19.102 [2024-07-15 13:26:15.694372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:24313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.102 [2024-07-15 13:26:15.694410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:19.102 [2024-07-15 13:26:15.705523] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190eb328 00:27:19.102 [2024-07-15 13:26:15.706692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:13993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.102 [2024-07-15 
13:26:15.706734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:19.102 [2024-07-15 13:26:15.719352] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190df550 00:27:19.102 [2024-07-15 13:26:15.721055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:16362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.102 [2024-07-15 13:26:15.721096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:19.102 [2024-07-15 13:26:15.730438] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190f7538 00:27:19.102 [2024-07-15 13:26:15.732181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:12866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.102 [2024-07-15 13:26:15.732242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:19.102 [2024-07-15 13:26:15.743528] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190ef270 00:27:19.102 [2024-07-15 13:26:15.744566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:11036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.102 [2024-07-15 13:26:15.744604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:19.102 [2024-07-15 13:26:15.755063] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190fcdd0 00:27:19.102 [2024-07-15 13:26:15.756273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:13658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.102 [2024-07-15 13:26:15.756311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:19.102 [2024-07-15 13:26:15.766622] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190e12d8 00:27:19.102 [2024-07-15 13:26:15.767934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:12048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.102 [2024-07-15 13:26:15.767969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:19.102 [2024-07-15 13:26:15.780867] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190f8618 00:27:19.102 [2024-07-15 13:26:15.782985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.102 [2024-07-15 13:26:15.783027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:19.103 [2024-07-15 13:26:15.789599] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190e9e10 00:27:19.103 [2024-07-15 13:26:15.790626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:23985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:19.103 [2024-07-15 13:26:15.790662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:19.103 [2024-07-15 13:26:15.804075] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190e4de8 00:27:19.103 [2024-07-15 13:26:15.805660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.103 [2024-07-15 13:26:15.805699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:19.103 [2024-07-15 13:26:15.815576] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190e4578 00:27:19.103 [2024-07-15 13:26:15.816974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:4747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.103 [2024-07-15 13:26:15.817013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:19.103 [2024-07-15 13:26:15.826803] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190fac10 00:27:19.103 [2024-07-15 13:26:15.827937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:8086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.103 [2024-07-15 13:26:15.827980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:19.103 [2024-07-15 13:26:15.838451] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190e84c0 00:27:19.103 [2024-07-15 13:26:15.839500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:6140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.103 [2024-07-15 13:26:15.839539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:19.382 [2024-07-15 13:26:15.849812] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190e8088 00:27:19.382 [2024-07-15 13:26:15.850717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:17829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.382 [2024-07-15 13:26:15.850774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.383 [2024-07-15 13:26:15.865446] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190f1868 00:27:19.383 [2024-07-15 13:26:15.867634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:20318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.383 [2024-07-15 13:26:15.867683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.383 [2024-07-15 13:26:15.874283] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190e3d08 00:27:19.383 [2024-07-15 13:26:15.875405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:9755 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:27:19.383 [2024-07-15 13:26:15.875457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:19.383 [2024-07-15 13:26:15.888942] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190ef6a8 00:27:19.383 [2024-07-15 13:26:15.890829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:11322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.383 [2024-07-15 13:26:15.890879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:19.383 [2024-07-15 13:26:15.901187] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190e84c0 00:27:19.383 [2024-07-15 13:26:15.903004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:5531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.383 [2024-07-15 13:26:15.903047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:19.383 [2024-07-15 13:26:15.909539] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190de470 00:27:19.383 [2024-07-15 13:26:15.910296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:23162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.383 [2024-07-15 13:26:15.910336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:19.383 [2024-07-15 13:26:15.924237] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190e01f8 00:27:19.383 [2024-07-15 13:26:15.925760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:6623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.383 [2024-07-15 13:26:15.925805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:19.383 [2024-07-15 13:26:15.936526] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190f8618 00:27:19.383 [2024-07-15 13:26:15.937513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:25492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.383 [2024-07-15 13:26:15.937556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:19.383 [2024-07-15 13:26:15.948668] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190e5658 00:27:19.383 [2024-07-15 13:26:15.949982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.383 [2024-07-15 13:26:15.950024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:19.383 [2024-07-15 13:26:15.959897] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190e9168 00:27:19.383 [2024-07-15 13:26:15.961047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:22208 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.383 [2024-07-15 13:26:15.961088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:19.383 [2024-07-15 13:26:15.971652] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190e1f80 00:27:19.383 [2024-07-15 13:26:15.972772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:7216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.383 [2024-07-15 13:26:15.972812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:19.383 [2024-07-15 13:26:15.986197] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190f2948 00:27:19.383 [2024-07-15 13:26:15.988106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:1059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.383 [2024-07-15 13:26:15.988152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:19.383 [2024-07-15 13:26:15.995062] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190fc998 00:27:19.383 [2024-07-15 13:26:15.995907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.383 [2024-07-15 13:26:15.995951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:19.383 [2024-07-15 13:26:16.009813] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190fef90 00:27:19.383 [2024-07-15 13:26:16.011410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:16130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.383 [2024-07-15 13:26:16.011454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:19.383 [2024-07-15 13:26:16.022144] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190f81e0 00:27:19.383 [2024-07-15 13:26:16.023708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:15818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.383 [2024-07-15 13:26:16.023750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:19.383 [2024-07-15 13:26:16.033635] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190e5a90 00:27:19.383 [2024-07-15 13:26:16.035003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:9969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.383 [2024-07-15 13:26:16.035045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:19.383 [2024-07-15 13:26:16.047578] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190eee38 00:27:19.383 [2024-07-15 13:26:16.049606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 
lba:24080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.383 [2024-07-15 13:26:16.049646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:19.383 [2024-07-15 13:26:16.056237] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190fc560 00:27:19.383 [2024-07-15 13:26:16.057240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:21505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.383 [2024-07-15 13:26:16.057285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:19.383 [2024-07-15 13:26:16.068748] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190ec408 00:27:19.383 [2024-07-15 13:26:16.069941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:1243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.383 [2024-07-15 13:26:16.069983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:19.383 [2024-07-15 13:26:16.080915] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190f57b0 00:27:19.383 [2024-07-15 13:26:16.082084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:16886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.383 [2024-07-15 13:26:16.082124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:19.383 [2024-07-15 13:26:16.094417] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190ebb98 00:27:19.383 [2024-07-15 13:26:16.096177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:22213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.383 [2024-07-15 13:26:16.096237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:19.646 [2024-07-15 13:26:16.105658] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190f7da8 00:27:19.646 [2024-07-15 13:26:16.106955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:9839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.646 [2024-07-15 13:26:16.107000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:19.646 [2024-07-15 13:26:16.117332] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190e73e0 00:27:19.646 [2024-07-15 13:26:16.118558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.646 [2024-07-15 13:26:16.118606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:19.646 [2024-07-15 13:26:16.128389] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190fe720 00:27:19.646 [2024-07-15 13:26:16.129420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:50 nsid:1 lba:7709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.646 [2024-07-15 13:26:16.129458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:19.646 [2024-07-15 13:26:16.139787] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190ee190 00:27:19.646 [2024-07-15 13:26:16.140680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:8711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.646 [2024-07-15 13:26:16.140719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:19.646 [2024-07-15 13:26:16.153866] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190f6458 00:27:19.646 [2024-07-15 13:26:16.154984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:19694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.646 [2024-07-15 13:26:16.155027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:19.646 [2024-07-15 13:26:16.164768] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190f2d80 00:27:19.646 [2024-07-15 13:26:16.166085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:11081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.646 [2024-07-15 13:26:16.166129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:19.646 [2024-07-15 13:26:16.176603] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190f0350 00:27:19.646 [2024-07-15 13:26:16.177853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:18751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.646 [2024-07-15 13:26:16.177892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:19.646 [2024-07-15 13:26:16.190937] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190e4578 00:27:19.646 [2024-07-15 13:26:16.192904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:23004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.646 [2024-07-15 13:26:16.192942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:19.646 [2024-07-15 13:26:16.199563] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190ea680 00:27:19.646 [2024-07-15 13:26:16.200473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:10766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.646 [2024-07-15 13:26:16.200509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:19.646 [2024-07-15 13:26:16.213862] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190e84c0 00:27:19.646 [2024-07-15 13:26:16.215549] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:8241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.646 [2024-07-15 13:26:16.215590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:19.646 [2024-07-15 13:26:16.226030] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190e0a68 00:27:19.646 [2024-07-15 13:26:16.227669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:15398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.646 [2024-07-15 13:26:16.227707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:19.646 [2024-07-15 13:26:16.235979] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190e73e0 00:27:19.646 [2024-07-15 13:26:16.236634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:18107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.646 [2024-07-15 13:26:16.236673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:19.646 [2024-07-15 13:26:16.248196] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190f46d0 00:27:19.646 [2024-07-15 13:26:16.249163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:10834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.646 [2024-07-15 13:26:16.249216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:19.646 [2024-07-15 13:26:16.259644] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190f96f8 00:27:19.646 [2024-07-15 13:26:16.260419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:18854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.646 [2024-07-15 13:26:16.260461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:19.646 [2024-07-15 13:26:16.273951] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190f4b08 00:27:19.646 [2024-07-15 13:26:16.275458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:1163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.646 [2024-07-15 13:26:16.275504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:19.646 [2024-07-15 13:26:16.283567] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190eb760 00:27:19.646 [2024-07-15 13:26:16.284341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.646 [2024-07-15 13:26:16.284382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:19.646 [2024-07-15 13:26:16.297079] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190ec408 00:27:19.646 [2024-07-15 13:26:16.298577] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:24106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.646 [2024-07-15 13:26:16.298619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:19.646 [2024-07-15 13:26:16.309581] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190e4140 00:27:19.646 [2024-07-15 13:26:16.311225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:4870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.646 [2024-07-15 13:26:16.311266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:19.646 [2024-07-15 13:26:16.320281] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190f6020 00:27:19.646 [2024-07-15 13:26:16.322040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:15127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.646 [2024-07-15 13:26:16.322084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:19.646 [2024-07-15 13:26:16.333112] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190f4b08 00:27:19.646 [2024-07-15 13:26:16.334108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.646 [2024-07-15 13:26:16.334148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:19.646 [2024-07-15 13:26:16.343991] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190ecc78 00:27:19.646 [2024-07-15 13:26:16.345120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:12281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.646 [2024-07-15 13:26:16.345161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:19.646 [2024-07-15 13:26:16.355820] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190ed920 00:27:19.646 [2024-07-15 13:26:16.356977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:12574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.646 [2024-07-15 13:26:16.357017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:19.646 [2024-07-15 13:26:16.368008] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190f96f8 00:27:19.646 [2024-07-15 13:26:16.369119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:9573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.646 [2024-07-15 13:26:16.369158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:19.646 [2024-07-15 13:26:16.382059] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190df988 00:27:19.646 [2024-07-15 13:26:16.383934] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:10926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.646 [2024-07-15 13:26:16.383978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:19.905 [2024-07-15 13:26:16.390706] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190f2948 00:27:19.905 [2024-07-15 13:26:16.391500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:17936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.905 [2024-07-15 13:26:16.391538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:19.905 [2024-07-15 13:26:16.403230] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190f31b8 00:27:19.905 [2024-07-15 13:26:16.404195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:6188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.905 [2024-07-15 13:26:16.404243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:19.905 [2024-07-15 13:26:16.415424] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190f4298 00:27:19.905 [2024-07-15 13:26:16.416413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:4094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.905 [2024-07-15 13:26:16.416453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:19.905 [2024-07-15 13:26:16.429474] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190e23b8 00:27:19.905 [2024-07-15 13:26:16.430990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:22277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.905 [2024-07-15 13:26:16.431034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:19.905 [2024-07-15 13:26:16.440518] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190de8a8 00:27:19.905 [2024-07-15 13:26:16.441824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:15537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.905 [2024-07-15 13:26:16.441862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:19.905 [2024-07-15 13:26:16.450485] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190f6458 00:27:19.905 [2024-07-15 13:26:16.451276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:23006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.905 [2024-07-15 13:26:16.451314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:19.905 [2024-07-15 13:26:16.464826] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190e0630 00:27:19.905 [2024-07-15 
13:26:16.466180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:1114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.905 [2024-07-15 13:26:16.466229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:19.906 [2024-07-15 13:26:16.476248] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190f31b8 00:27:19.906 [2024-07-15 13:26:16.477589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:7488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.906 [2024-07-15 13:26:16.477628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:19.906 [2024-07-15 13:26:16.488428] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190f4298 00:27:19.906 [2024-07-15 13:26:16.489738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:21515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.906 [2024-07-15 13:26:16.489777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:19.906 [2024-07-15 13:26:16.499827] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190e49b0 00:27:19.906 [2024-07-15 13:26:16.501000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:16207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.906 [2024-07-15 13:26:16.501040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:19.906 [2024-07-15 13:26:16.513818] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190f2510 00:27:19.906 [2024-07-15 13:26:16.515683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:24292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.906 [2024-07-15 13:26:16.515724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:19.906 [2024-07-15 13:26:16.525964] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190e7c50 00:27:19.906 [2024-07-15 13:26:16.527815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:13940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.906 [2024-07-15 13:26:16.527854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:19.906 [2024-07-15 13:26:16.537693] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190f1ca0 00:27:19.906 [2024-07-15 13:26:16.539551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:21058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.906 [2024-07-15 13:26:16.539590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:19.906 [2024-07-15 13:26:16.546284] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190fa7d8 
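The repeated tcp.c:2058:data_crc32_calc_done *ERROR* records above report a mismatch between a received NVMe/TCP data PDU and its CRC32C data digest (DDGST); each mismatch is then surfaced to the host as the COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion printed alongside it. As a minimal standalone sketch of that kind of check (not SPDK's implementation, which typically uses an optimized CRC32C routine), the following program compares a computed CRC32C against an expected digest; the payload and expected value are just the standard CRC-32C test vector:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/*
 * Minimal bitwise CRC32C (Castagnoli, reflected polynomial 0x82F63B78).
 * Illustrative only; this is not SPDK's CRC32C routine.
 */
static uint32_t crc32c(uint32_t crc, const void *buf, size_t len)
{
        const uint8_t *p = buf;

        crc = ~crc;
        while (len--) {
                crc ^= *p++;
                for (int i = 0; i < 8; i++)
                        crc = (crc >> 1) ^ (0x82F63B78U & -(crc & 1U));
        }
        return ~crc;
}

int main(void)
{
        const char payload[] = "123456789";  /* standard CRC-32C test vector */
        uint32_t expected = 0xE3069283U;     /* its well-known CRC32C value  */
        uint32_t actual = crc32c(0, payload, strlen(payload));

        if (actual != expected)
                printf("Data digest error: got 0x%08X, expected 0x%08X\n",
                       (unsigned)actual, (unsigned)expected);
        else
                printf("Data digest OK: 0x%08X\n", (unsigned)actual);
        return 0;
}

A real data-digest check runs over the DATA field of the received PDU rather than a fixed string; the point of the sketch is only that a digest mismatch is detected by recomputing CRC32C locally and comparing it with the digest carried on the wire.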
00:27:19.906 [2024-07-15 13:26:16.547087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:7951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.906 [2024-07-15 13:26:16.547127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:19.906 [2024-07-15 13:26:16.558795] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190eee38 00:27:19.906 [2024-07-15 13:26:16.559775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:15605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.906 [2024-07-15 13:26:16.559815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:19.906 [2024-07-15 13:26:16.570988] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190fac10 00:27:19.906 [2024-07-15 13:26:16.571988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:2087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.906 [2024-07-15 13:26:16.572027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:19.906 [2024-07-15 13:26:16.583874] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190e27f0 00:27:19.906 [2024-07-15 13:26:16.585000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:11624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.906 [2024-07-15 13:26:16.585040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:19.906 [2024-07-15 13:26:16.595838] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190e7c50 00:27:19.906 [2024-07-15 13:26:16.596624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:12059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.906 [2024-07-15 13:26:16.596665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:19.906 [2024-07-15 13:26:16.607286] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190eaab8 00:27:19.906 [2024-07-15 13:26:16.607933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:8919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.906 [2024-07-15 13:26:16.607977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:19.906 [2024-07-15 13:26:16.621366] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190e38d0 00:27:19.906 [2024-07-15 13:26:16.623221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:8783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.906 [2024-07-15 13:26:16.623263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:19.906 [2024-07-15 13:26:16.633545] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with 
pdu=0x2000190fa3a0 00:27:19.906 [2024-07-15 13:26:16.635371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:6524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.906 [2024-07-15 13:26:16.635413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:19.906 [2024-07-15 13:26:16.643452] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190e95a0 00:27:20.165 [2024-07-15 13:26:16.644270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:14048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.165 [2024-07-15 13:26:16.644309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:20.165 [2024-07-15 13:26:16.655677] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190f8618 00:27:20.165 [2024-07-15 13:26:16.656827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:15811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.165 [2024-07-15 13:26:16.656867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:20.165 [2024-07-15 13:26:16.668933] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190df550 00:27:20.165 [2024-07-15 13:26:16.670621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:18769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.165 [2024-07-15 13:26:16.670659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:20.165 [2024-07-15 13:26:16.678571] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190f6890 00:27:20.166 [2024-07-15 13:26:16.679531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.166 [2024-07-15 13:26:16.679569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:20.166 [2024-07-15 13:26:16.690319] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190ea248 00:27:20.166 [2024-07-15 13:26:16.691299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:1340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.166 [2024-07-15 13:26:16.691339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:20.166 [2024-07-15 13:26:16.702873] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190ebb98 00:27:20.166 [2024-07-15 13:26:16.704027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:12770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.166 [2024-07-15 13:26:16.704068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:20.166 [2024-07-15 13:26:16.715165] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xc119f0) with pdu=0x2000190f8a50 00:27:20.166 [2024-07-15 13:26:16.716325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:13314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.166 [2024-07-15 13:26:16.716364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:20.166 [2024-07-15 13:26:16.727004] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190eff18 00:27:20.166 [2024-07-15 13:26:16.728126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:7722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.166 [2024-07-15 13:26:16.728166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:20.166 [2024-07-15 13:26:16.741431] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190e4140 00:27:20.166 [2024-07-15 13:26:16.743260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.166 [2024-07-15 13:26:16.743303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:20.166 [2024-07-15 13:26:16.749982] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190e1f80 00:27:20.166 [2024-07-15 13:26:16.750802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:1102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.166 [2024-07-15 13:26:16.750848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:20.166 [2024-07-15 13:26:16.762552] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190e99d8 00:27:20.166 [2024-07-15 13:26:16.763541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:7593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.166 [2024-07-15 13:26:16.763580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:20.166 [2024-07-15 13:26:16.774767] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190e12d8 00:27:20.166 [2024-07-15 13:26:16.775762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:24181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.166 [2024-07-15 13:26:16.775801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:20.166 [2024-07-15 13:26:16.788341] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190e7c50 00:27:20.166 [2024-07-15 13:26:16.789901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:13918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.166 [2024-07-15 13:26:16.789939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:20.166 [2024-07-15 13:26:16.799681] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xc119f0) with pdu=0x2000190e9e10 00:27:20.166 [2024-07-15 13:26:16.800772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:11538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.166 [2024-07-15 13:26:16.800814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:20.166 [2024-07-15 13:26:16.811505] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190e23b8 00:27:20.166 [2024-07-15 13:26:16.812696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:23545 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.166 [2024-07-15 13:26:16.812736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:20.166 [2024-07-15 13:26:16.823521] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190e3d08 00:27:20.166 [2024-07-15 13:26:16.824189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:19222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.166 [2024-07-15 13:26:16.824238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:20.166 [2024-07-15 13:26:16.838372] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190f96f8 00:27:20.166 [2024-07-15 13:26:16.840520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:21006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.166 [2024-07-15 13:26:16.840565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:20.166 [2024-07-15 13:26:16.847073] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190eb760 00:27:20.166 [2024-07-15 13:26:16.847967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:22821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.166 [2024-07-15 13:26:16.848007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:20.166 [2024-07-15 13:26:16.861397] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190f0788 00:27:20.166 [2024-07-15 13:26:16.862950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:4378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.166 [2024-07-15 13:26:16.862993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:20.166 [2024-07-15 13:26:16.872847] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190fb048 00:27:20.166 [2024-07-15 13:26:16.874247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:6079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.166 [2024-07-15 13:26:16.874286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:20.166 [2024-07-15 13:26:16.884844] tcp.c:2058:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190df550 00:27:20.166 [2024-07-15 13:26:16.886436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:10796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.166 [2024-07-15 13:26:16.886476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:20.166 [2024-07-15 13:26:16.897476] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190e49b0 00:27:20.166 [2024-07-15 13:26:16.899252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:10354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.166 [2024-07-15 13:26:16.899294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:20.425 [2024-07-15 13:26:16.908759] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190f20d8 00:27:20.425 [2024-07-15 13:26:16.910062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:6560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.425 [2024-07-15 13:26:16.910103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:20.425 [2024-07-15 13:26:16.920554] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190ebfd0 00:27:20.425 [2024-07-15 13:26:16.921981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:2575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.425 [2024-07-15 13:26:16.922025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:20.425 [2024-07-15 13:26:16.931955] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190f35f0 00:27:20.425 [2024-07-15 13:26:16.933005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:21700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.425 [2024-07-15 13:26:16.933049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:20.425 [2024-07-15 13:26:16.943843] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190e0a68 00:27:20.425 [2024-07-15 13:26:16.944759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:5074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.425 [2024-07-15 13:26:16.944800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.425 [2024-07-15 13:26:16.959301] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190f96f8 00:27:20.425 [2024-07-15 13:26:16.961467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:10897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.425 [2024-07-15 13:26:16.961513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.425 [2024-07-15 13:26:16.968048] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190eb760 00:27:20.425 [2024-07-15 13:26:16.969145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:7065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.425 [2024-07-15 13:26:16.969185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:20.425 [2024-07-15 13:26:16.980611] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190ee5c8 00:27:20.425 [2024-07-15 13:26:16.981888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:3924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.425 [2024-07-15 13:26:16.981927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:20.425 [2024-07-15 13:26:16.992715] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190f7538 00:27:20.425 [2024-07-15 13:26:16.993473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:17167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.425 [2024-07-15 13:26:16.993515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:20.425 [2024-07-15 13:26:17.004364] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190f7538 00:27:20.425 [2024-07-15 13:26:17.004988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:4963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.425 [2024-07-15 13:26:17.005021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:20.425 [2024-07-15 13:26:17.018082] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190f96f8 00:27:20.425 [2024-07-15 13:26:17.019554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:3039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.425 [2024-07-15 13:26:17.019598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:20.425 [2024-07-15 13:26:17.028792] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190f6cc8 00:27:20.425 [2024-07-15 13:26:17.030644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:4297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.425 [2024-07-15 13:26:17.030686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:20.425 [2024-07-15 13:26:17.039341] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190e95a0 00:27:20.425 [2024-07-15 13:26:17.040079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:21217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.425 [2024-07-15 13:26:17.040119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:20.425 [2024-07-15 
13:26:17.053762] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190fac10 00:27:20.425 [2024-07-15 13:26:17.055251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:23547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.425 [2024-07-15 13:26:17.055288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:20.425 [2024-07-15 13:26:17.065786] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190e6fa8 00:27:20.425 [2024-07-15 13:26:17.066741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:5713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.425 [2024-07-15 13:26:17.066796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:20.425 [2024-07-15 13:26:17.077024] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190f6458 00:27:20.425 [2024-07-15 13:26:17.078869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:2944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.425 [2024-07-15 13:26:17.078916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:20.425 [2024-07-15 13:26:17.087657] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190f46d0 00:27:20.425 [2024-07-15 13:26:17.088436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:15092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.425 [2024-07-15 13:26:17.088474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:20.425 [2024-07-15 13:26:17.099943] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190e4140 00:27:20.425 [2024-07-15 13:26:17.100763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:9902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.425 [2024-07-15 13:26:17.100804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:20.425 [2024-07-15 13:26:17.114168] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190e6b70 00:27:20.425 [2024-07-15 13:26:17.115712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:6226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.425 [2024-07-15 13:26:17.115754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:20.425 [2024-07-15 13:26:17.125613] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190f5be8 00:27:20.425 [2024-07-15 13:26:17.126694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:23368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.425 [2024-07-15 13:26:17.126735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 
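Each injected digest failure completes the in-flight WRITE with status (00/22), which spdk_nvme_print_completion renders as COMMAND TRANSIENT TRANSPORT ERROR, and every completion above carries dnr:0, so a retry is not forbidden. The sketch below only illustrates splitting that printed (SCT/SC) pair and deciding to requeue; the macro and helper names are hypothetical and are not SPDK APIs:

#include <stdbool.h>
#include <stdio.h>

#define SCT_GENERIC                0x00  /* generic command status type */
#define SC_TRANSIENT_TRANSPORT_ERR 0x22  /* printed as "COMMAND TRANSIENT TRANSPORT ERROR" */

static bool completion_is_retryable(unsigned sct, unsigned sc, unsigned dnr)
{
        /* dnr:0 in the completions above means Do Not Retry was not set. */
        if (dnr)
                return false;
        return sct == SCT_GENERIC && sc == SC_TRANSIENT_TRANSPORT_ERR;
}

int main(void)
{
        unsigned sct = 0x00, sc = 0x22, dnr = 0;  /* values taken from the completions above */

        printf("status (%02x/%02x) dnr:%u -> %s\n", sct, sc, dnr,
               completion_is_retryable(sct, sc, dnr) ? "requeue I/O" : "fail I/O");
        return 0;
}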
00:27:20.425 [2024-07-15 13:26:17.137395] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190ebb98 00:27:20.425 [2024-07-15 13:26:17.138394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:25567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.425 [2024-07-15 13:26:17.138435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:20.425 [2024-07-15 13:26:17.150698] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190df118 00:27:20.425 [2024-07-15 13:26:17.152269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:20033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.425 [2024-07-15 13:26:17.152309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:20.425 [2024-07-15 13:26:17.162548] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190f5be8 00:27:20.683 [2024-07-15 13:26:17.163545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:3890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.683 [2024-07-15 13:26:17.163586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:20.683 [2024-07-15 13:26:17.174109] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190efae0 00:27:20.683 [2024-07-15 13:26:17.175035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:4959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.683 [2024-07-15 13:26:17.175083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:20.683 [2024-07-15 13:26:17.185861] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190e3498 00:27:20.683 [2024-07-15 13:26:17.186531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:20782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.683 [2024-07-15 13:26:17.186570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:20.684 [2024-07-15 13:26:17.200368] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190e8d30 00:27:20.684 [2024-07-15 13:26:17.202289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:3052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.684 [2024-07-15 13:26:17.202336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:20.684 [2024-07-15 13:26:17.209221] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190f3e60 00:27:20.684 [2024-07-15 13:26:17.210055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.684 [2024-07-15 13:26:17.210095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:000a 
p:0 m:0 dnr:0 00:27:20.684 [2024-07-15 13:26:17.221456] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190f20d8 00:27:20.684 [2024-07-15 13:26:17.222290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:19567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.684 [2024-07-15 13:26:17.222331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:20.684 [2024-07-15 13:26:17.233230] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190f6020 00:27:20.684 [2024-07-15 13:26:17.234023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:23383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.684 [2024-07-15 13:26:17.234062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:20.684 [2024-07-15 13:26:17.245429] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190fd208 00:27:20.684 [2024-07-15 13:26:17.246242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.684 [2024-07-15 13:26:17.246284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:20.684 [2024-07-15 13:26:17.259748] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190e6738 00:27:20.684 [2024-07-15 13:26:17.261288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:18568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.684 [2024-07-15 13:26:17.261328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:20.684 [2024-07-15 13:26:17.271740] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190f92c0 00:27:20.684 [2024-07-15 13:26:17.272730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:5785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.684 [2024-07-15 13:26:17.272769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:20.684 [2024-07-15 13:26:17.283647] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190eee38 00:27:20.684 [2024-07-15 13:26:17.284971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:11719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.684 [2024-07-15 13:26:17.285010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:20.684 [2024-07-15 13:26:17.295113] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190ee190 00:27:20.684 [2024-07-15 13:26:17.296550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:24202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.684 [2024-07-15 13:26:17.296594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 
cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:20.684 [2024-07-15 13:26:17.307592] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190e1f80 00:27:20.684 [2024-07-15 13:26:17.308976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.684 [2024-07-15 13:26:17.309016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:20.684 [2024-07-15 13:26:17.319187] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190e95a0 00:27:20.684 [2024-07-15 13:26:17.320429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:20329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.684 [2024-07-15 13:26:17.320473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:20.684 [2024-07-15 13:26:17.331087] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190ef270 00:27:20.684 [2024-07-15 13:26:17.332332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:14550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.684 [2024-07-15 13:26:17.332374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:20.684 [2024-07-15 13:26:17.345550] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190f2948 00:27:20.684 [2024-07-15 13:26:17.347443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:1050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.684 [2024-07-15 13:26:17.347495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:20.684 [2024-07-15 13:26:17.354099] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190eea00 00:27:20.684 [2024-07-15 13:26:17.354969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:19360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.684 [2024-07-15 13:26:17.355007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:20.684 [2024-07-15 13:26:17.366430] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190dfdc0 00:27:20.684 [2024-07-15 13:26:17.367310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:15324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.684 [2024-07-15 13:26:17.367349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:20.684 [2024-07-15 13:26:17.379992] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190f7538 00:27:20.684 [2024-07-15 13:26:17.381437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:22477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.684 [2024-07-15 13:26:17.381476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:91 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:20.684 [2024-07-15 13:26:17.392107] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190eaef0 00:27:20.684 [2024-07-15 13:26:17.393012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:7413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.684 [2024-07-15 13:26:17.393053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:20.684 [2024-07-15 13:26:17.404183] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190dece0 00:27:20.684 [2024-07-15 13:26:17.405467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:8311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.684 [2024-07-15 13:26:17.405509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:20.684 [2024-07-15 13:26:17.415808] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190f46d0 00:27:20.684 [2024-07-15 13:26:17.417116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:18016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.684 [2024-07-15 13:26:17.417160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:20.943 [2024-07-15 13:26:17.428677] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190e6fa8 00:27:20.943 [2024-07-15 13:26:17.430142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:19175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.943 [2024-07-15 13:26:17.430185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:20.943 [2024-07-15 13:26:17.443436] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190f3e60 00:27:20.943 [2024-07-15 13:26:17.445675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:23682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.943 [2024-07-15 13:26:17.445720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.943 [2024-07-15 13:26:17.452257] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190f96f8 00:27:20.943 [2024-07-15 13:26:17.453147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.943 [2024-07-15 13:26:17.453186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:20.943 [2024-07-15 13:26:17.463734] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190eff18 00:27:20.943 [2024-07-15 13:26:17.464639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:18644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.943 [2024-07-15 13:26:17.464678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:20.943 [2024-07-15 13:26:17.478180] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190eee38 00:27:20.943 [2024-07-15 13:26:17.479902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:8525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.943 [2024-07-15 13:26:17.479949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:20.943 [2024-07-15 13:26:17.489791] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190fe720 00:27:20.943 [2024-07-15 13:26:17.491094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:11415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.943 [2024-07-15 13:26:17.491143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:20.943 [2024-07-15 13:26:17.501878] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190e88f8 00:27:20.943 [2024-07-15 13:26:17.503270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.943 [2024-07-15 13:26:17.503313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:20.943 [2024-07-15 13:26:17.514660] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190ecc78 00:27:20.943 [2024-07-15 13:26:17.516218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:19076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.943 [2024-07-15 13:26:17.516259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:20.943 [2024-07-15 13:26:17.526936] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190f5be8 00:27:20.943 [2024-07-15 13:26:17.528443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:22295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.943 [2024-07-15 13:26:17.528486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:20.943 [2024-07-15 13:26:17.538814] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190ea680 00:27:20.943 [2024-07-15 13:26:17.539776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:15225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.943 [2024-07-15 13:26:17.539817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:20.943 [2024-07-15 13:26:17.550676] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190f6020 00:27:20.943 [2024-07-15 13:26:17.551989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:22341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.943 [2024-07-15 13:26:17.552037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:20.943 [2024-07-15 13:26:17.562680] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190eb760 00:27:20.943 [2024-07-15 13:26:17.564085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:20014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.943 [2024-07-15 13:26:17.564133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:20.943 [2024-07-15 13:26:17.575247] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190fb480 00:27:20.943 [2024-07-15 13:26:17.576608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:12111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.943 [2024-07-15 13:26:17.576652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:20.943 [2024-07-15 13:26:17.587291] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190f4298 00:27:20.943 [2024-07-15 13:26:17.588100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:4574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.943 [2024-07-15 13:26:17.588143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:20.943 [2024-07-15 13:26:17.599868] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190e0ea0 00:27:20.943 [2024-07-15 13:26:17.600850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.943 [2024-07-15 13:26:17.600891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:20.943 [2024-07-15 13:26:17.611588] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190fbcf0 00:27:20.943 [2024-07-15 13:26:17.612825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:2047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.943 [2024-07-15 13:26:17.612868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:20.943 [2024-07-15 13:26:17.623491] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc119f0) with pdu=0x2000190e84c0 00:27:20.944 [2024-07-15 13:26:17.624687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:16880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.944 [2024-07-15 13:26:17.624733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:20.944 00:27:20.944 Latency(us) 00:27:20.944 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:20.944 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:20.944 nvme0n1 : 2.01 21079.76 82.34 0.00 0.00 6065.99 2398.02 16801.05 00:27:20.944 
=================================================================================================================== 00:27:20.944 Total : 21079.76 82.34 0.00 0.00 6065.99 2398.02 16801.05 00:27:20.944 0 00:27:20.944 13:26:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:20.944 13:26:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:20.944 | .driver_specific 00:27:20.944 | .nvme_error 00:27:20.944 | .status_code 00:27:20.944 | .command_transient_transport_error' 00:27:20.944 13:26:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:20.944 13:26:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:21.510 13:26:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 165 > 0 )) 00:27:21.510 13:26:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 112189 00:27:21.510 13:26:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 112189 ']' 00:27:21.510 13:26:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 112189 00:27:21.510 13:26:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:27:21.510 13:26:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:21.510 13:26:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 112189 00:27:21.510 killing process with pid 112189 00:27:21.510 Received shutdown signal, test time was about 2.000000 seconds 00:27:21.510 00:27:21.510 Latency(us) 00:27:21.510 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:21.510 =================================================================================================================== 00:27:21.510 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:21.510 13:26:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:21.510 13:26:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:21.510 13:26:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 112189' 00:27:21.510 13:26:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 112189 00:27:21.510 13:26:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 112189 00:27:21.510 13:26:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:27:21.510 13:26:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:21.510 13:26:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:27:21.510 13:26:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:27:21.510 13:26:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:27:21.510 13:26:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=112284 00:27:21.510 13:26:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 112284 /var/tmp/bperf.sock 00:27:21.510 13:26:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:27:21.510 13:26:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 112284 ']' 00:27:21.510 13:26:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:21.510 13:26:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:21.510 13:26:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:21.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:21.510 13:26:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:21.510 13:26:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:21.510 [2024-07-15 13:26:18.246908] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:27:21.510 [2024-07-15 13:26:18.247304] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112284 ] 00:27:21.510 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:21.510 Zero copy mechanism will not be used. 00:27:21.768 [2024-07-15 13:26:18.396014] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:21.769 [2024-07-15 13:26:18.495538] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:22.703 13:26:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:22.703 13:26:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:27:22.703 13:26:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:22.703 13:26:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:22.962 13:26:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:22.962 13:26:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.962 13:26:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:22.962 13:26:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.962 13:26:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:22.962 13:26:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:23.220 nvme0n1 00:27:23.220 13:26:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:27:23.220 13:26:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:27:23.220 13:26:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:23.220 13:26:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.220 13:26:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:23.220 13:26:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:23.220 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:23.220 Zero copy mechanism will not be used. 00:27:23.220 Running I/O for 2 seconds... 00:27:23.220 [2024-07-15 13:26:19.930492] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.220 [2024-07-15 13:26:19.930834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.220 [2024-07-15 13:26:19.930872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:23.220 [2024-07-15 13:26:19.935682] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.220 [2024-07-15 13:26:19.935972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.220 [2024-07-15 13:26:19.936014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:23.220 [2024-07-15 13:26:19.940983] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.220 [2024-07-15 13:26:19.941288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.220 [2024-07-15 13:26:19.941328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.220 [2024-07-15 13:26:19.946158] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.220 [2024-07-15 13:26:19.946464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.220 [2024-07-15 13:26:19.946511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.220 [2024-07-15 13:26:19.951327] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.220 [2024-07-15 13:26:19.951629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.220 [2024-07-15 13:26:19.951664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:23.220 [2024-07-15 13:26:19.956527] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.220 [2024-07-15 13:26:19.956816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.220 [2024-07-15 13:26:19.956852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:23.479 [2024-07-15 13:26:19.961639] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.479 [2024-07-15 13:26:19.961930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.479 [2024-07-15 13:26:19.961969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.479 [2024-07-15 13:26:19.966863] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.479 [2024-07-15 13:26:19.967167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.479 [2024-07-15 13:26:19.967213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.479 [2024-07-15 13:26:19.972061] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.479 [2024-07-15 13:26:19.972361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.479 [2024-07-15 13:26:19.972393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:23.479 [2024-07-15 13:26:19.977235] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.479 [2024-07-15 13:26:19.977524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.479 [2024-07-15 13:26:19.977559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:23.479 [2024-07-15 13:26:19.982355] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.479 [2024-07-15 13:26:19.982652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.479 [2024-07-15 13:26:19.982686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.479 [2024-07-15 13:26:19.987503] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.479 [2024-07-15 13:26:19.987789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.479 [2024-07-15 13:26:19.987831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.479 [2024-07-15 13:26:19.992628] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.479 [2024-07-15 13:26:19.992927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.479 [2024-07-15 13:26:19.992959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:23.479 [2024-07-15 13:26:19.997808] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.479 [2024-07-15 13:26:19.998104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.479 [2024-07-15 13:26:19.998142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:23.479 [2024-07-15 13:26:20.003046] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.479 [2024-07-15 13:26:20.003359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.479 [2024-07-15 13:26:20.003397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.479 [2024-07-15 13:26:20.008249] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.479 [2024-07-15 13:26:20.008539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.479 [2024-07-15 13:26:20.008571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.479 [2024-07-15 13:26:20.013446] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.479 [2024-07-15 13:26:20.013751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.479 [2024-07-15 13:26:20.013785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:23.479 [2024-07-15 13:26:20.018582] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.479 [2024-07-15 13:26:20.018883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.479 [2024-07-15 13:26:20.018928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:23.479 [2024-07-15 13:26:20.023700] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.479 [2024-07-15 13:26:20.023988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.479 [2024-07-15 13:26:20.024020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.479 [2024-07-15 13:26:20.028831] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.479 [2024-07-15 13:26:20.029133] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.479 [2024-07-15 13:26:20.029168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.479 [2024-07-15 13:26:20.033933] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.479 [2024-07-15 13:26:20.034231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.479 [2024-07-15 13:26:20.034262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:23.479 [2024-07-15 13:26:20.039053] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.479 [2024-07-15 13:26:20.039357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.480 [2024-07-15 13:26:20.039390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:23.480 [2024-07-15 13:26:20.044134] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.480 [2024-07-15 13:26:20.044442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.480 [2024-07-15 13:26:20.044474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.480 [2024-07-15 13:26:20.049301] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.480 [2024-07-15 13:26:20.049607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.480 [2024-07-15 13:26:20.049644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.480 [2024-07-15 13:26:20.054490] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.480 [2024-07-15 13:26:20.054809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.480 [2024-07-15 13:26:20.054840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:23.480 [2024-07-15 13:26:20.059565] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.480 [2024-07-15 13:26:20.059856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.480 [2024-07-15 13:26:20.059888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:23.480 [2024-07-15 13:26:20.064636] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.480 
[2024-07-15 13:26:20.064929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.480 [2024-07-15 13:26:20.064967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.480 [2024-07-15 13:26:20.069782] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.480 [2024-07-15 13:26:20.070075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.480 [2024-07-15 13:26:20.070098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.480 [2024-07-15 13:26:20.074961] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.480 [2024-07-15 13:26:20.075269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.480 [2024-07-15 13:26:20.075306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:23.480 [2024-07-15 13:26:20.080013] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.480 [2024-07-15 13:26:20.080311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.480 [2024-07-15 13:26:20.080334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:23.480 [2024-07-15 13:26:20.085126] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.480 [2024-07-15 13:26:20.085426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.480 [2024-07-15 13:26:20.085461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.480 [2024-07-15 13:26:20.090239] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.480 [2024-07-15 13:26:20.090524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.480 [2024-07-15 13:26:20.090561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.480 [2024-07-15 13:26:20.095370] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.480 [2024-07-15 13:26:20.095683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.480 [2024-07-15 13:26:20.095720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:23.480 [2024-07-15 13:26:20.100585] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) 
with pdu=0x2000190fef90 00:27:23.480 [2024-07-15 13:26:20.100873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.480 [2024-07-15 13:26:20.100907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:23.480 [2024-07-15 13:26:20.105668] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.480 [2024-07-15 13:26:20.105957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.480 [2024-07-15 13:26:20.105989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.480 [2024-07-15 13:26:20.110811] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.480 [2024-07-15 13:26:20.111100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.480 [2024-07-15 13:26:20.111137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.480 [2024-07-15 13:26:20.115951] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.480 [2024-07-15 13:26:20.116256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.480 [2024-07-15 13:26:20.116288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:23.480 [2024-07-15 13:26:20.121008] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.480 [2024-07-15 13:26:20.121309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.480 [2024-07-15 13:26:20.121337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:23.480 [2024-07-15 13:26:20.126091] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.480 [2024-07-15 13:26:20.126393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.480 [2024-07-15 13:26:20.126434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.480 [2024-07-15 13:26:20.131218] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.480 [2024-07-15 13:26:20.131509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.480 [2024-07-15 13:26:20.131542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.480 [2024-07-15 13:26:20.136286] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.480 [2024-07-15 13:26:20.136573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.480 [2024-07-15 13:26:20.136607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:23.480 [2024-07-15 13:26:20.141391] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.480 [2024-07-15 13:26:20.141683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.480 [2024-07-15 13:26:20.141707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:23.480 [2024-07-15 13:26:20.146508] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.480 [2024-07-15 13:26:20.146807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.480 [2024-07-15 13:26:20.146839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.480 [2024-07-15 13:26:20.151640] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.480 [2024-07-15 13:26:20.151929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.480 [2024-07-15 13:26:20.151961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.480 [2024-07-15 13:26:20.156769] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.480 [2024-07-15 13:26:20.157070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.481 [2024-07-15 13:26:20.157102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:23.481 [2024-07-15 13:26:20.161875] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.481 [2024-07-15 13:26:20.162162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.481 [2024-07-15 13:26:20.162197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:23.481 [2024-07-15 13:26:20.167042] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.481 [2024-07-15 13:26:20.167355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.481 [2024-07-15 13:26:20.167383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.481 [2024-07-15 13:26:20.172143] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.481 [2024-07-15 13:26:20.172444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.481 [2024-07-15 13:26:20.172477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.481 [2024-07-15 13:26:20.177261] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.481 [2024-07-15 13:26:20.177548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.481 [2024-07-15 13:26:20.177580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:23.481 [2024-07-15 13:26:20.182363] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.481 [2024-07-15 13:26:20.182654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.481 [2024-07-15 13:26:20.182683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:23.481 [2024-07-15 13:26:20.187523] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.481 [2024-07-15 13:26:20.187813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.481 [2024-07-15 13:26:20.187845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.481 [2024-07-15 13:26:20.192625] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.481 [2024-07-15 13:26:20.192915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.481 [2024-07-15 13:26:20.192947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.481 [2024-07-15 13:26:20.197708] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.481 [2024-07-15 13:26:20.197994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.481 [2024-07-15 13:26:20.198029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:23.481 [2024-07-15 13:26:20.202779] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.481 [2024-07-15 13:26:20.203069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.481 [2024-07-15 13:26:20.203150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:27:23.481 [2024-07-15 13:26:20.207963] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.481 [2024-07-15 13:26:20.208262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.481 [2024-07-15 13:26:20.208286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.481 [2024-07-15 13:26:20.213043] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.481 [2024-07-15 13:26:20.213342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.481 [2024-07-15 13:26:20.213366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.741 [2024-07-15 13:26:20.218138] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.741 [2024-07-15 13:26:20.218440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.741 [2024-07-15 13:26:20.218472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:23.741 [2024-07-15 13:26:20.223332] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.741 [2024-07-15 13:26:20.223624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.741 [2024-07-15 13:26:20.223646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:23.741 [2024-07-15 13:26:20.228478] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.741 [2024-07-15 13:26:20.228768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.741 [2024-07-15 13:26:20.228800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.741 [2024-07-15 13:26:20.233609] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.741 [2024-07-15 13:26:20.233899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.741 [2024-07-15 13:26:20.233940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.741 [2024-07-15 13:26:20.238809] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.741 [2024-07-15 13:26:20.239118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.741 [2024-07-15 13:26:20.239153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:23.741 [2024-07-15 13:26:20.244016] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.741 [2024-07-15 13:26:20.244319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.741 [2024-07-15 13:26:20.244346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:23.741 [2024-07-15 13:26:20.249138] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.741 [2024-07-15 13:26:20.249442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.741 [2024-07-15 13:26:20.249477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.741 [2024-07-15 13:26:20.254264] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.741 [2024-07-15 13:26:20.254555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.741 [2024-07-15 13:26:20.254592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.741 [2024-07-15 13:26:20.259401] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.741 [2024-07-15 13:26:20.259694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.741 [2024-07-15 13:26:20.259726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:23.741 [2024-07-15 13:26:20.264513] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.741 [2024-07-15 13:26:20.264816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.741 [2024-07-15 13:26:20.264847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:23.741 [2024-07-15 13:26:20.269726] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.741 [2024-07-15 13:26:20.270013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.741 [2024-07-15 13:26:20.270047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.741 [2024-07-15 13:26:20.274898] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.741 [2024-07-15 13:26:20.275200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.741 [2024-07-15 13:26:20.275246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.741 [2024-07-15 13:26:20.280024] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.741 [2024-07-15 13:26:20.280325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.741 [2024-07-15 13:26:20.280369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:23.741 [2024-07-15 13:26:20.285152] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.741 [2024-07-15 13:26:20.285460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.741 [2024-07-15 13:26:20.285492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:23.741 [2024-07-15 13:26:20.290301] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.742 [2024-07-15 13:26:20.290594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.742 [2024-07-15 13:26:20.290631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.742 [2024-07-15 13:26:20.295416] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.742 [2024-07-15 13:26:20.295710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.742 [2024-07-15 13:26:20.295747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.742 [2024-07-15 13:26:20.300517] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.742 [2024-07-15 13:26:20.300809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.742 [2024-07-15 13:26:20.300849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:23.742 [2024-07-15 13:26:20.305611] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.742 [2024-07-15 13:26:20.305900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.742 [2024-07-15 13:26:20.305931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:23.742 [2024-07-15 13:26:20.310723] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.742 [2024-07-15 13:26:20.311020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.742 [2024-07-15 13:26:20.311051] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.742 [2024-07-15 13:26:20.315858] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.742 [2024-07-15 13:26:20.316148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.742 [2024-07-15 13:26:20.316178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.742 [2024-07-15 13:26:20.320950] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.742 [2024-07-15 13:26:20.321253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.742 [2024-07-15 13:26:20.321284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:23.742 [2024-07-15 13:26:20.326053] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.742 [2024-07-15 13:26:20.326358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.742 [2024-07-15 13:26:20.326393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:23.742 [2024-07-15 13:26:20.331186] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.742 [2024-07-15 13:26:20.331488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.742 [2024-07-15 13:26:20.331513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.742 [2024-07-15 13:26:20.336321] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.742 [2024-07-15 13:26:20.336611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.742 [2024-07-15 13:26:20.336648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.742 [2024-07-15 13:26:20.341497] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.742 [2024-07-15 13:26:20.341804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.742 [2024-07-15 13:26:20.341843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:23.742 [2024-07-15 13:26:20.346678] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.742 [2024-07-15 13:26:20.347000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.742 
[2024-07-15 13:26:20.347038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:23.742 [2024-07-15 13:26:20.351886] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.742 [2024-07-15 13:26:20.352179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.742 [2024-07-15 13:26:20.352230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.742 [2024-07-15 13:26:20.357032] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.742 [2024-07-15 13:26:20.357333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.742 [2024-07-15 13:26:20.357366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.742 [2024-07-15 13:26:20.362096] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.742 [2024-07-15 13:26:20.362400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.742 [2024-07-15 13:26:20.362432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:23.742 [2024-07-15 13:26:20.367184] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.742 [2024-07-15 13:26:20.367505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.742 [2024-07-15 13:26:20.367547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:23.742 [2024-07-15 13:26:20.372329] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.742 [2024-07-15 13:26:20.372621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.742 [2024-07-15 13:26:20.372665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.742 [2024-07-15 13:26:20.377473] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.742 [2024-07-15 13:26:20.377771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.742 [2024-07-15 13:26:20.377805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.742 [2024-07-15 13:26:20.382576] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.742 [2024-07-15 13:26:20.382879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:23.742 [2024-07-15 13:26:20.382911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:23.742 [2024-07-15 13:26:20.387711] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.742 [2024-07-15 13:26:20.388005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.742 [2024-07-15 13:26:20.388045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:23.742 [2024-07-15 13:26:20.392858] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.742 [2024-07-15 13:26:20.393165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.742 [2024-07-15 13:26:20.393216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.742 [2024-07-15 13:26:20.397966] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.743 [2024-07-15 13:26:20.398286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.743 [2024-07-15 13:26:20.398318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.743 [2024-07-15 13:26:20.403163] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.743 [2024-07-15 13:26:20.403476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.743 [2024-07-15 13:26:20.403512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:23.743 [2024-07-15 13:26:20.408303] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.743 [2024-07-15 13:26:20.408595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.743 [2024-07-15 13:26:20.408626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:23.743 [2024-07-15 13:26:20.413426] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.743 [2024-07-15 13:26:20.413716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.743 [2024-07-15 13:26:20.413747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.743 [2024-07-15 13:26:20.418531] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.743 [2024-07-15 13:26:20.418829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.743 [2024-07-15 13:26:20.418861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.743 [2024-07-15 13:26:20.423647] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.743 [2024-07-15 13:26:20.423951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.743 [2024-07-15 13:26:20.423983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:23.743 [2024-07-15 13:26:20.428877] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.743 [2024-07-15 13:26:20.429172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.743 [2024-07-15 13:26:20.429216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:23.743 [2024-07-15 13:26:20.433985] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.743 [2024-07-15 13:26:20.434286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.743 [2024-07-15 13:26:20.434309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.743 [2024-07-15 13:26:20.439142] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.743 [2024-07-15 13:26:20.439450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.743 [2024-07-15 13:26:20.439484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.743 [2024-07-15 13:26:20.444271] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.743 [2024-07-15 13:26:20.444561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.743 [2024-07-15 13:26:20.444592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:23.743 [2024-07-15 13:26:20.449424] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.743 [2024-07-15 13:26:20.449714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.743 [2024-07-15 13:26:20.449746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:23.743 [2024-07-15 13:26:20.454500] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.743 [2024-07-15 13:26:20.454800] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.743 [2024-07-15 13:26:20.454836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.743 [2024-07-15 13:26:20.459616] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.743 [2024-07-15 13:26:20.459909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.743 [2024-07-15 13:26:20.459940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.743 [2024-07-15 13:26:20.464743] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.743 [2024-07-15 13:26:20.465045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.743 [2024-07-15 13:26:20.465078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:23.743 [2024-07-15 13:26:20.469883] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.743 [2024-07-15 13:26:20.470186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.743 [2024-07-15 13:26:20.470239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:23.743 [2024-07-15 13:26:20.475071] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:23.743 [2024-07-15 13:26:20.475380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.743 [2024-07-15 13:26:20.475417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.003 [2024-07-15 13:26:20.480255] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.003 [2024-07-15 13:26:20.480545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.003 [2024-07-15 13:26:20.480577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.003 [2024-07-15 13:26:20.485353] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.003 [2024-07-15 13:26:20.485643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.003 [2024-07-15 13:26:20.485683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.003 [2024-07-15 13:26:20.490498] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.003 [2024-07-15 13:26:20.490809] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.003 [2024-07-15 13:26:20.490853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.003 [2024-07-15 13:26:20.495640] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.003 [2024-07-15 13:26:20.495928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.003 [2024-07-15 13:26:20.495963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.003 [2024-07-15 13:26:20.500762] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.003 [2024-07-15 13:26:20.501053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.003 [2024-07-15 13:26:20.501082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.003 [2024-07-15 13:26:20.505940] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.003 [2024-07-15 13:26:20.506259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.003 [2024-07-15 13:26:20.506291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.003 [2024-07-15 13:26:20.511121] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.003 [2024-07-15 13:26:20.511423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.003 [2024-07-15 13:26:20.511455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.003 [2024-07-15 13:26:20.516184] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.003 [2024-07-15 13:26:20.516487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.003 [2024-07-15 13:26:20.516519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.003 [2024-07-15 13:26:20.521299] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.003 [2024-07-15 13:26:20.521600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.003 [2024-07-15 13:26:20.521631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.003 [2024-07-15 13:26:20.526420] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.003 
[2024-07-15 13:26:20.526709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.003 [2024-07-15 13:26:20.526740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.003 [2024-07-15 13:26:20.531525] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.003 [2024-07-15 13:26:20.531812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.003 [2024-07-15 13:26:20.531849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.003 [2024-07-15 13:26:20.536620] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.003 [2024-07-15 13:26:20.536911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.003 [2024-07-15 13:26:20.536947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.003 [2024-07-15 13:26:20.541763] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.003 [2024-07-15 13:26:20.542051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.003 [2024-07-15 13:26:20.542075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.003 [2024-07-15 13:26:20.546964] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.003 [2024-07-15 13:26:20.547285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.003 [2024-07-15 13:26:20.547318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.003 [2024-07-15 13:26:20.552135] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.003 [2024-07-15 13:26:20.552436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.003 [2024-07-15 13:26:20.552468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.003 [2024-07-15 13:26:20.557233] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.003 [2024-07-15 13:26:20.557533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.003 [2024-07-15 13:26:20.557574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.003 [2024-07-15 13:26:20.562386] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with 
pdu=0x2000190fef90 00:27:24.003 [2024-07-15 13:26:20.562686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.003 [2024-07-15 13:26:20.562717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.003 [2024-07-15 13:26:20.567500] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.003 [2024-07-15 13:26:20.567787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.003 [2024-07-15 13:26:20.567818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.003 [2024-07-15 13:26:20.572626] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.003 [2024-07-15 13:26:20.572915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.003 [2024-07-15 13:26:20.572946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.003 [2024-07-15 13:26:20.577688] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.003 [2024-07-15 13:26:20.577982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.003 [2024-07-15 13:26:20.578013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.003 [2024-07-15 13:26:20.582827] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.004 [2024-07-15 13:26:20.583118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.004 [2024-07-15 13:26:20.583155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.004 [2024-07-15 13:26:20.587962] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.004 [2024-07-15 13:26:20.588281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.004 [2024-07-15 13:26:20.588309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.004 [2024-07-15 13:26:20.593054] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.004 [2024-07-15 13:26:20.593357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.004 [2024-07-15 13:26:20.593394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.004 [2024-07-15 13:26:20.598143] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.004 [2024-07-15 13:26:20.598440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.004 [2024-07-15 13:26:20.598473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.004 [2024-07-15 13:26:20.603292] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.004 [2024-07-15 13:26:20.603580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.004 [2024-07-15 13:26:20.603607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.004 [2024-07-15 13:26:20.608442] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.004 [2024-07-15 13:26:20.608730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.004 [2024-07-15 13:26:20.608762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.004 [2024-07-15 13:26:20.613537] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.004 [2024-07-15 13:26:20.613826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.004 [2024-07-15 13:26:20.613857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.004 [2024-07-15 13:26:20.618618] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.004 [2024-07-15 13:26:20.618916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.004 [2024-07-15 13:26:20.618948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.004 [2024-07-15 13:26:20.623702] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.004 [2024-07-15 13:26:20.624000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.004 [2024-07-15 13:26:20.624041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.004 [2024-07-15 13:26:20.628868] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.004 [2024-07-15 13:26:20.629153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.004 [2024-07-15 13:26:20.629194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.004 [2024-07-15 13:26:20.633965] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.004 [2024-07-15 13:26:20.634263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.004 [2024-07-15 13:26:20.634286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.004 [2024-07-15 13:26:20.639111] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.004 [2024-07-15 13:26:20.639429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.004 [2024-07-15 13:26:20.639469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.004 [2024-07-15 13:26:20.644243] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.004 [2024-07-15 13:26:20.644533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.004 [2024-07-15 13:26:20.644572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.004 [2024-07-15 13:26:20.649390] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.004 [2024-07-15 13:26:20.649684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.004 [2024-07-15 13:26:20.649718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.004 [2024-07-15 13:26:20.654458] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.004 [2024-07-15 13:26:20.654757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.004 [2024-07-15 13:26:20.654793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.004 [2024-07-15 13:26:20.659566] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.004 [2024-07-15 13:26:20.659855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.004 [2024-07-15 13:26:20.659886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.004 [2024-07-15 13:26:20.664681] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.004 [2024-07-15 13:26:20.664970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.004 [2024-07-15 13:26:20.665000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:27:24.004 [2024-07-15 13:26:20.669770] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.004 [2024-07-15 13:26:20.670068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.004 [2024-07-15 13:26:20.670109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.004 [2024-07-15 13:26:20.674905] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.004 [2024-07-15 13:26:20.675195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.004 [2024-07-15 13:26:20.675242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.004 [2024-07-15 13:26:20.679999] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.004 [2024-07-15 13:26:20.680300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.004 [2024-07-15 13:26:20.680328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.004 [2024-07-15 13:26:20.685142] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.004 [2024-07-15 13:26:20.685468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.004 [2024-07-15 13:26:20.685509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.004 [2024-07-15 13:26:20.690320] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.004 [2024-07-15 13:26:20.690618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.005 [2024-07-15 13:26:20.690649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.005 [2024-07-15 13:26:20.695441] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.005 [2024-07-15 13:26:20.695738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.005 [2024-07-15 13:26:20.695771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.005 [2024-07-15 13:26:20.700599] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.005 [2024-07-15 13:26:20.700890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.005 [2024-07-15 13:26:20.700924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.005 [2024-07-15 13:26:20.705701] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.005 [2024-07-15 13:26:20.706000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.005 [2024-07-15 13:26:20.706031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.005 [2024-07-15 13:26:20.710823] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.005 [2024-07-15 13:26:20.711111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.005 [2024-07-15 13:26:20.711147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.005 [2024-07-15 13:26:20.715893] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.005 [2024-07-15 13:26:20.716198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.005 [2024-07-15 13:26:20.716242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.005 [2024-07-15 13:26:20.720993] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.005 [2024-07-15 13:26:20.721292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.005 [2024-07-15 13:26:20.721316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.005 [2024-07-15 13:26:20.726108] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.005 [2024-07-15 13:26:20.726408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.005 [2024-07-15 13:26:20.726442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.005 [2024-07-15 13:26:20.731231] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.005 [2024-07-15 13:26:20.731529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.005 [2024-07-15 13:26:20.731562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.005 [2024-07-15 13:26:20.736351] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.005 [2024-07-15 13:26:20.736659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.005 [2024-07-15 13:26:20.736691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.265 [2024-07-15 13:26:20.741514] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.265 [2024-07-15 13:26:20.741806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.265 [2024-07-15 13:26:20.741840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.265 [2024-07-15 13:26:20.746629] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.265 [2024-07-15 13:26:20.746925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.265 [2024-07-15 13:26:20.746956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.265 [2024-07-15 13:26:20.751809] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.265 [2024-07-15 13:26:20.752121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.265 [2024-07-15 13:26:20.752155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.265 [2024-07-15 13:26:20.756938] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.265 [2024-07-15 13:26:20.757254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.265 [2024-07-15 13:26:20.757288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.265 [2024-07-15 13:26:20.762085] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.265 [2024-07-15 13:26:20.762385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.265 [2024-07-15 13:26:20.762416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.265 [2024-07-15 13:26:20.767220] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.265 [2024-07-15 13:26:20.767509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.265 [2024-07-15 13:26:20.767539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.265 [2024-07-15 13:26:20.772306] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.265 [2024-07-15 13:26:20.772597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.265 [2024-07-15 13:26:20.772628] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.265 [2024-07-15 13:26:20.777406] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.265 [2024-07-15 13:26:20.777690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.265 [2024-07-15 13:26:20.777722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.265 [2024-07-15 13:26:20.782499] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.265 [2024-07-15 13:26:20.782802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.265 [2024-07-15 13:26:20.782844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.265 [2024-07-15 13:26:20.787637] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.265 [2024-07-15 13:26:20.787923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.265 [2024-07-15 13:26:20.787955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.266 [2024-07-15 13:26:20.792742] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.266 [2024-07-15 13:26:20.793032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.266 [2024-07-15 13:26:20.793065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.266 [2024-07-15 13:26:20.797924] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.266 [2024-07-15 13:26:20.798227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.266 [2024-07-15 13:26:20.798260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.266 [2024-07-15 13:26:20.803082] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.266 [2024-07-15 13:26:20.803387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.266 [2024-07-15 13:26:20.803421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.266 [2024-07-15 13:26:20.808252] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.266 [2024-07-15 13:26:20.808550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.266 
[2024-07-15 13:26:20.808583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.266 [2024-07-15 13:26:20.813371] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.266 [2024-07-15 13:26:20.813659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.266 [2024-07-15 13:26:20.813691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.266 [2024-07-15 13:26:20.818481] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.266 [2024-07-15 13:26:20.818794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.266 [2024-07-15 13:26:20.818832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.266 [2024-07-15 13:26:20.823710] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.266 [2024-07-15 13:26:20.823998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.266 [2024-07-15 13:26:20.824034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.266 [2024-07-15 13:26:20.828814] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.266 [2024-07-15 13:26:20.829105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.266 [2024-07-15 13:26:20.829151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.266 [2024-07-15 13:26:20.834009] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.266 [2024-07-15 13:26:20.834316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.266 [2024-07-15 13:26:20.834359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.266 [2024-07-15 13:26:20.839129] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.266 [2024-07-15 13:26:20.839437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.266 [2024-07-15 13:26:20.839474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.266 [2024-07-15 13:26:20.844302] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.266 [2024-07-15 13:26:20.844621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:27:24.266 [2024-07-15 13:26:20.844658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.266 [2024-07-15 13:26:20.849464] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.266 [2024-07-15 13:26:20.849754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.266 [2024-07-15 13:26:20.849796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.266 [2024-07-15 13:26:20.854592] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.266 [2024-07-15 13:26:20.854892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.266 [2024-07-15 13:26:20.854932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.266 [2024-07-15 13:26:20.859718] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.266 [2024-07-15 13:26:20.860014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.266 [2024-07-15 13:26:20.860039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.266 [2024-07-15 13:26:20.864848] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.266 [2024-07-15 13:26:20.865156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.266 [2024-07-15 13:26:20.865196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.266 [2024-07-15 13:26:20.869919] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.266 [2024-07-15 13:26:20.870221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.266 [2024-07-15 13:26:20.870254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.266 [2024-07-15 13:26:20.875022] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.266 [2024-07-15 13:26:20.875334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.266 [2024-07-15 13:26:20.875366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.266 [2024-07-15 13:26:20.880122] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.266 [2024-07-15 13:26:20.880424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.266 [2024-07-15 13:26:20.880461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.266 [2024-07-15 13:26:20.885278] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.266 [2024-07-15 13:26:20.885575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.266 [2024-07-15 13:26:20.885607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.266 [2024-07-15 13:26:20.890417] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.266 [2024-07-15 13:26:20.890708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.266 [2024-07-15 13:26:20.890738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.266 [2024-07-15 13:26:20.895549] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.266 [2024-07-15 13:26:20.895844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.266 [2024-07-15 13:26:20.895867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.266 [2024-07-15 13:26:20.900656] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.266 [2024-07-15 13:26:20.900941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.266 [2024-07-15 13:26:20.900982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.266 [2024-07-15 13:26:20.905777] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.266 [2024-07-15 13:26:20.906063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.266 [2024-07-15 13:26:20.906098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.266 [2024-07-15 13:26:20.910923] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.266 [2024-07-15 13:26:20.911244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.266 [2024-07-15 13:26:20.911281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.266 [2024-07-15 13:26:20.916029] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.266 [2024-07-15 13:26:20.916330] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.266 [2024-07-15 13:26:20.916367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.266 [2024-07-15 13:26:20.921186] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.266 [2024-07-15 13:26:20.921494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.266 [2024-07-15 13:26:20.921526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.266 [2024-07-15 13:26:20.926310] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.266 [2024-07-15 13:26:20.926603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.266 [2024-07-15 13:26:20.926638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.266 [2024-07-15 13:26:20.931450] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.266 [2024-07-15 13:26:20.931755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.266 [2024-07-15 13:26:20.931781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.266 [2024-07-15 13:26:20.936606] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.266 [2024-07-15 13:26:20.936913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.266 [2024-07-15 13:26:20.936940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.266 [2024-07-15 13:26:20.941857] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.266 [2024-07-15 13:26:20.942159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.266 [2024-07-15 13:26:20.942185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.266 [2024-07-15 13:26:20.946966] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.266 [2024-07-15 13:26:20.947290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.266 [2024-07-15 13:26:20.947320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.266 [2024-07-15 13:26:20.952189] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.266 [2024-07-15 13:26:20.952511] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.266 [2024-07-15 13:26:20.952554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.266 [2024-07-15 13:26:20.957449] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.266 [2024-07-15 13:26:20.957766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.266 [2024-07-15 13:26:20.957793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.266 [2024-07-15 13:26:20.962613] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.266 [2024-07-15 13:26:20.962921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.266 [2024-07-15 13:26:20.962956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.266 [2024-07-15 13:26:20.967704] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.266 [2024-07-15 13:26:20.967998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.266 [2024-07-15 13:26:20.968042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.266 [2024-07-15 13:26:20.972828] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.266 [2024-07-15 13:26:20.973119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.266 [2024-07-15 13:26:20.973152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.266 [2024-07-15 13:26:20.977901] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.266 [2024-07-15 13:26:20.978196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.266 [2024-07-15 13:26:20.978232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.266 [2024-07-15 13:26:20.983029] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.266 [2024-07-15 13:26:20.983345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.266 [2024-07-15 13:26:20.983372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.266 [2024-07-15 13:26:20.988164] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.266 [2024-07-15 
13:26:20.988476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.266 [2024-07-15 13:26:20.988514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.266 [2024-07-15 13:26:20.993345] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.266 [2024-07-15 13:26:20.993657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.266 [2024-07-15 13:26:20.993695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.266 [2024-07-15 13:26:20.998515] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.266 [2024-07-15 13:26:20.998821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.266 [2024-07-15 13:26:20.998853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.525 [2024-07-15 13:26:21.003687] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.525 [2024-07-15 13:26:21.003999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.525 [2024-07-15 13:26:21.004038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.525 [2024-07-15 13:26:21.008823] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.525 [2024-07-15 13:26:21.009121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.525 [2024-07-15 13:26:21.009155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.525 [2024-07-15 13:26:21.013954] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.526 [2024-07-15 13:26:21.014260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.526 [2024-07-15 13:26:21.014285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.526 [2024-07-15 13:26:21.019045] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.526 [2024-07-15 13:26:21.019349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.526 [2024-07-15 13:26:21.019381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.526 [2024-07-15 13:26:21.024148] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with 
pdu=0x2000190fef90 00:27:24.526 [2024-07-15 13:26:21.024458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.526 [2024-07-15 13:26:21.024492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.526 [2024-07-15 13:26:21.029281] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.526 [2024-07-15 13:26:21.029590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.526 [2024-07-15 13:26:21.029630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.526 [2024-07-15 13:26:21.034477] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.526 [2024-07-15 13:26:21.034778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.526 [2024-07-15 13:26:21.034874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.526 [2024-07-15 13:26:21.039663] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.526 [2024-07-15 13:26:21.039971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.526 [2024-07-15 13:26:21.040006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.526 [2024-07-15 13:26:21.044792] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.526 [2024-07-15 13:26:21.045087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.526 [2024-07-15 13:26:21.045120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.526 [2024-07-15 13:26:21.049871] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.526 [2024-07-15 13:26:21.050175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.526 [2024-07-15 13:26:21.050202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.526 [2024-07-15 13:26:21.055009] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.526 [2024-07-15 13:26:21.055321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.526 [2024-07-15 13:26:21.055356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.526 [2024-07-15 13:26:21.060223] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.526 [2024-07-15 13:26:21.060521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.526 [2024-07-15 13:26:21.060556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.526 [2024-07-15 13:26:21.065385] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.526 [2024-07-15 13:26:21.065685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.526 [2024-07-15 13:26:21.065726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.526 [2024-07-15 13:26:21.070505] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.526 [2024-07-15 13:26:21.070830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.526 [2024-07-15 13:26:21.070865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.526 [2024-07-15 13:26:21.075695] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.526 [2024-07-15 13:26:21.075996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.526 [2024-07-15 13:26:21.076029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.526 [2024-07-15 13:26:21.080844] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.526 [2024-07-15 13:26:21.081156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.526 [2024-07-15 13:26:21.081196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.526 [2024-07-15 13:26:21.086038] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.526 [2024-07-15 13:26:21.086359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.526 [2024-07-15 13:26:21.086385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.526 [2024-07-15 13:26:21.091186] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.526 [2024-07-15 13:26:21.091504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.526 [2024-07-15 13:26:21.091562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.526 [2024-07-15 13:26:21.096388] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.526 [2024-07-15 13:26:21.096690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.526 [2024-07-15 13:26:21.096728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.526 [2024-07-15 13:26:21.101539] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.526 [2024-07-15 13:26:21.101839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.526 [2024-07-15 13:26:21.101877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.526 [2024-07-15 13:26:21.106768] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.526 [2024-07-15 13:26:21.107070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.526 [2024-07-15 13:26:21.107107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.526 [2024-07-15 13:26:21.111945] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.526 [2024-07-15 13:26:21.112273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.526 [2024-07-15 13:26:21.112312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.526 [2024-07-15 13:26:21.117226] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.526 [2024-07-15 13:26:21.117522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.526 [2024-07-15 13:26:21.117558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.526 [2024-07-15 13:26:21.122415] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.526 [2024-07-15 13:26:21.122716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.526 [2024-07-15 13:26:21.122761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.526 [2024-07-15 13:26:21.127641] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.526 [2024-07-15 13:26:21.127934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.526 [2024-07-15 13:26:21.127965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:27:24.526 [2024-07-15 13:26:21.132815] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.526 [2024-07-15 13:26:21.133106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.526 [2024-07-15 13:26:21.133149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.526 [2024-07-15 13:26:21.138042] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.526 [2024-07-15 13:26:21.138359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.526 [2024-07-15 13:26:21.138398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.526 [2024-07-15 13:26:21.143234] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.526 [2024-07-15 13:26:21.143535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.526 [2024-07-15 13:26:21.143566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.526 [2024-07-15 13:26:21.148442] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.526 [2024-07-15 13:26:21.148756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.526 [2024-07-15 13:26:21.148790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.526 [2024-07-15 13:26:21.153638] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.526 [2024-07-15 13:26:21.153947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.526 [2024-07-15 13:26:21.153976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.526 [2024-07-15 13:26:21.158853] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.526 [2024-07-15 13:26:21.159172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.526 [2024-07-15 13:26:21.159223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.526 [2024-07-15 13:26:21.164063] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.526 [2024-07-15 13:26:21.164396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.526 [2024-07-15 13:26:21.164436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.526 [2024-07-15 13:26:21.169380] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.526 [2024-07-15 13:26:21.169699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.526 [2024-07-15 13:26:21.169740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.526 [2024-07-15 13:26:21.174624] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.526 [2024-07-15 13:26:21.174961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.526 [2024-07-15 13:26:21.174993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.526 [2024-07-15 13:26:21.179852] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.526 [2024-07-15 13:26:21.180183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.526 [2024-07-15 13:26:21.180239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.526 [2024-07-15 13:26:21.185166] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.526 [2024-07-15 13:26:21.185517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.526 [2024-07-15 13:26:21.185553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.526 [2024-07-15 13:26:21.190424] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.526 [2024-07-15 13:26:21.190761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.526 [2024-07-15 13:26:21.190790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.526 [2024-07-15 13:26:21.195649] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.526 [2024-07-15 13:26:21.195961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.526 [2024-07-15 13:26:21.196001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.526 [2024-07-15 13:26:21.200900] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.526 [2024-07-15 13:26:21.201218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.526 [2024-07-15 13:26:21.201265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.526 [2024-07-15 13:26:21.206096] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.526 [2024-07-15 13:26:21.206399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.526 [2024-07-15 13:26:21.206438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.526 [2024-07-15 13:26:21.211255] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.526 [2024-07-15 13:26:21.211548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.526 [2024-07-15 13:26:21.211585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.526 [2024-07-15 13:26:21.216435] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.526 [2024-07-15 13:26:21.216739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.526 [2024-07-15 13:26:21.216773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.526 [2024-07-15 13:26:21.221615] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.526 [2024-07-15 13:26:21.221922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.526 [2024-07-15 13:26:21.221981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.526 [2024-07-15 13:26:21.226895] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.526 [2024-07-15 13:26:21.227218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.526 [2024-07-15 13:26:21.227258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.526 [2024-07-15 13:26:21.232107] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.526 [2024-07-15 13:26:21.232445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.526 [2024-07-15 13:26:21.232481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.526 [2024-07-15 13:26:21.237348] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.526 [2024-07-15 13:26:21.237656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.526 [2024-07-15 13:26:21.237692] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.526 [2024-07-15 13:26:21.242514] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.526 [2024-07-15 13:26:21.242834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.526 [2024-07-15 13:26:21.242870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.526 [2024-07-15 13:26:21.247869] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.526 [2024-07-15 13:26:21.248191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.526 [2024-07-15 13:26:21.248246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.526 [2024-07-15 13:26:21.253083] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.526 [2024-07-15 13:26:21.253413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.526 [2024-07-15 13:26:21.253453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.526 [2024-07-15 13:26:21.258280] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.526 [2024-07-15 13:26:21.258586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.526 [2024-07-15 13:26:21.258614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.526 [2024-07-15 13:26:21.263538] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.526 [2024-07-15 13:26:21.263843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.526 [2024-07-15 13:26:21.263877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.785 [2024-07-15 13:26:21.268694] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.785 [2024-07-15 13:26:21.268986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.785 [2024-07-15 13:26:21.269022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.786 [2024-07-15 13:26:21.273834] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.786 [2024-07-15 13:26:21.274128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.786 [2024-07-15 
13:26:21.274158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.786 [2024-07-15 13:26:21.278944] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.786 [2024-07-15 13:26:21.279248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.786 [2024-07-15 13:26:21.279283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.786 [2024-07-15 13:26:21.284086] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.786 [2024-07-15 13:26:21.284401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.786 [2024-07-15 13:26:21.284435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.786 [2024-07-15 13:26:21.289200] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.786 [2024-07-15 13:26:21.289518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.786 [2024-07-15 13:26:21.289558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.786 [2024-07-15 13:26:21.294416] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.786 [2024-07-15 13:26:21.294723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.786 [2024-07-15 13:26:21.294773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.786 [2024-07-15 13:26:21.299622] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.786 [2024-07-15 13:26:21.299937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.786 [2024-07-15 13:26:21.299979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.786 [2024-07-15 13:26:21.304820] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.786 [2024-07-15 13:26:21.305135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.786 [2024-07-15 13:26:21.305164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.786 [2024-07-15 13:26:21.310010] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.786 [2024-07-15 13:26:21.310329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:24.786 [2024-07-15 13:26:21.310357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.786 [2024-07-15 13:26:21.315229] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.786 [2024-07-15 13:26:21.315544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.786 [2024-07-15 13:26:21.315573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.786 [2024-07-15 13:26:21.320470] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.786 [2024-07-15 13:26:21.320787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.786 [2024-07-15 13:26:21.320817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.786 [2024-07-15 13:26:21.325652] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.786 [2024-07-15 13:26:21.325967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.786 [2024-07-15 13:26:21.325996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.786 [2024-07-15 13:26:21.330799] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.786 [2024-07-15 13:26:21.331107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.786 [2024-07-15 13:26:21.331150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.786 [2024-07-15 13:26:21.335967] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.786 [2024-07-15 13:26:21.336276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.786 [2024-07-15 13:26:21.336300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.786 [2024-07-15 13:26:21.341081] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.786 [2024-07-15 13:26:21.341396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.786 [2024-07-15 13:26:21.341429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.786 [2024-07-15 13:26:21.346229] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.786 [2024-07-15 13:26:21.346518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.786 [2024-07-15 13:26:21.346552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.786 [2024-07-15 13:26:21.351363] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.786 [2024-07-15 13:26:21.351655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.786 [2024-07-15 13:26:21.351696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.786 [2024-07-15 13:26:21.356591] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.786 [2024-07-15 13:26:21.356899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.786 [2024-07-15 13:26:21.356936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.786 [2024-07-15 13:26:21.361753] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.786 [2024-07-15 13:26:21.362069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.786 [2024-07-15 13:26:21.362097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.786 [2024-07-15 13:26:21.367021] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.786 [2024-07-15 13:26:21.367346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.786 [2024-07-15 13:26:21.367376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.786 [2024-07-15 13:26:21.372224] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.786 [2024-07-15 13:26:21.372528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.786 [2024-07-15 13:26:21.372572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.786 [2024-07-15 13:26:21.377442] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.786 [2024-07-15 13:26:21.377752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.786 [2024-07-15 13:26:21.377783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.786 [2024-07-15 13:26:21.382634] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.786 [2024-07-15 13:26:21.382951] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.786 [2024-07-15 13:26:21.382984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.786 [2024-07-15 13:26:21.387805] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.786 [2024-07-15 13:26:21.388112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.786 [2024-07-15 13:26:21.388140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.786 [2024-07-15 13:26:21.393063] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.786 [2024-07-15 13:26:21.393386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.786 [2024-07-15 13:26:21.393419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.786 [2024-07-15 13:26:21.398298] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.786 [2024-07-15 13:26:21.398629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.786 [2024-07-15 13:26:21.398656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.786 [2024-07-15 13:26:21.403459] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.786 [2024-07-15 13:26:21.403755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.786 [2024-07-15 13:26:21.403788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.787 [2024-07-15 13:26:21.408605] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.787 [2024-07-15 13:26:21.408899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.787 [2024-07-15 13:26:21.408930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.787 [2024-07-15 13:26:21.413702] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.787 [2024-07-15 13:26:21.413991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.787 [2024-07-15 13:26:21.414031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.787 [2024-07-15 13:26:21.418851] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.787 [2024-07-15 13:26:21.419152] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.787 [2024-07-15 13:26:21.419185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.787 [2024-07-15 13:26:21.424023] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.787 [2024-07-15 13:26:21.424333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.787 [2024-07-15 13:26:21.424366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.787 [2024-07-15 13:26:21.429218] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.787 [2024-07-15 13:26:21.429531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.787 [2024-07-15 13:26:21.429570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.787 [2024-07-15 13:26:21.434403] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.787 [2024-07-15 13:26:21.434718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.787 [2024-07-15 13:26:21.434767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.787 [2024-07-15 13:26:21.439575] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.787 [2024-07-15 13:26:21.439878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.787 [2024-07-15 13:26:21.439913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.787 [2024-07-15 13:26:21.444737] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.787 [2024-07-15 13:26:21.445033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.787 [2024-07-15 13:26:21.445058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.787 [2024-07-15 13:26:21.449908] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.787 [2024-07-15 13:26:21.450220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.787 [2024-07-15 13:26:21.450250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.787 [2024-07-15 13:26:21.455038] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.787 [2024-07-15 
13:26:21.455349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.787 [2024-07-15 13:26:21.455393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.787 [2024-07-15 13:26:21.460158] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.787 [2024-07-15 13:26:21.460480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.787 [2024-07-15 13:26:21.460524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.787 [2024-07-15 13:26:21.465383] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.787 [2024-07-15 13:26:21.465694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.787 [2024-07-15 13:26:21.465739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.787 [2024-07-15 13:26:21.470631] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.787 [2024-07-15 13:26:21.470942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.787 [2024-07-15 13:26:21.470972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.787 [2024-07-15 13:26:21.475759] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.787 [2024-07-15 13:26:21.476051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.787 [2024-07-15 13:26:21.476083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.787 [2024-07-15 13:26:21.480859] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.787 [2024-07-15 13:26:21.481168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.787 [2024-07-15 13:26:21.481200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.787 [2024-07-15 13:26:21.486029] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.787 [2024-07-15 13:26:21.486330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.787 [2024-07-15 13:26:21.486360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.787 [2024-07-15 13:26:21.491152] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with 
pdu=0x2000190fef90 00:27:24.787 [2024-07-15 13:26:21.491466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.787 [2024-07-15 13:26:21.491491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.787 [2024-07-15 13:26:21.496371] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.787 [2024-07-15 13:26:21.496694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.787 [2024-07-15 13:26:21.496730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.787 [2024-07-15 13:26:21.501572] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.787 [2024-07-15 13:26:21.501872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.787 [2024-07-15 13:26:21.501898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.787 [2024-07-15 13:26:21.506798] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.787 [2024-07-15 13:26:21.507117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.787 [2024-07-15 13:26:21.507145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.787 [2024-07-15 13:26:21.511966] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.787 [2024-07-15 13:26:21.512302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.787 [2024-07-15 13:26:21.512342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.787 [2024-07-15 13:26:21.517135] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.787 [2024-07-15 13:26:21.517454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.787 [2024-07-15 13:26:21.517489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.787 [2024-07-15 13:26:21.522308] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:24.787 [2024-07-15 13:26:21.522608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.787 [2024-07-15 13:26:21.522654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:25.047 [2024-07-15 13:26:21.527530] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:25.047 [2024-07-15 13:26:21.527835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.047 [2024-07-15 13:26:21.527881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:25.047 [2024-07-15 13:26:21.532655] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:25.047 [2024-07-15 13:26:21.532960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.047 [2024-07-15 13:26:21.533002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.047 [2024-07-15 13:26:21.537767] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:25.047 [2024-07-15 13:26:21.538059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.047 [2024-07-15 13:26:21.538092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:25.047 [2024-07-15 13:26:21.542899] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:25.047 [2024-07-15 13:26:21.543220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.047 [2024-07-15 13:26:21.543258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:25.047 [2024-07-15 13:26:21.548012] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:25.047 [2024-07-15 13:26:21.548319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.047 [2024-07-15 13:26:21.548343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:25.047 [2024-07-15 13:26:21.553093] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:25.047 [2024-07-15 13:26:21.553399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.047 [2024-07-15 13:26:21.553427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.047 [2024-07-15 13:26:21.558188] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:25.047 [2024-07-15 13:26:21.558506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.047 [2024-07-15 13:26:21.558537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:25.047 [2024-07-15 13:26:21.563353] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:25.047 [2024-07-15 13:26:21.563675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.047 [2024-07-15 13:26:21.563711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:25.047 [2024-07-15 13:26:21.568561] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:25.047 [2024-07-15 13:26:21.568867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.047 [2024-07-15 13:26:21.568900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:25.047 [2024-07-15 13:26:21.573677] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:25.047 [2024-07-15 13:26:21.573979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.047 [2024-07-15 13:26:21.574025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.047 [2024-07-15 13:26:21.578936] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:25.047 [2024-07-15 13:26:21.579280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.047 [2024-07-15 13:26:21.579308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:25.047 [2024-07-15 13:26:21.584168] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:25.047 [2024-07-15 13:26:21.584508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.047 [2024-07-15 13:26:21.584543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:25.047 [2024-07-15 13:26:21.589386] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:25.047 [2024-07-15 13:26:21.589689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.047 [2024-07-15 13:26:21.589730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:25.047 [2024-07-15 13:26:21.594510] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:25.047 [2024-07-15 13:26:21.594835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.047 [2024-07-15 13:26:21.594862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:27:25.047 [2024-07-15 13:26:21.599707] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:25.047 [2024-07-15 13:26:21.600010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.047 [2024-07-15 13:26:21.600040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:25.047 [2024-07-15 13:26:21.604829] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:25.047 [2024-07-15 13:26:21.605121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.047 [2024-07-15 13:26:21.605154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:25.047 [2024-07-15 13:26:21.609914] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:25.047 [2024-07-15 13:26:21.610200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.047 [2024-07-15 13:26:21.610242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:25.047 [2024-07-15 13:26:21.614965] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:25.047 [2024-07-15 13:26:21.615267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.047 [2024-07-15 13:26:21.615292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.047 [2024-07-15 13:26:21.620055] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:25.047 [2024-07-15 13:26:21.620381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.047 [2024-07-15 13:26:21.620408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:25.047 [2024-07-15 13:26:21.625177] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:25.047 [2024-07-15 13:26:21.625497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.047 [2024-07-15 13:26:21.625538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:25.047 [2024-07-15 13:26:21.630383] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:25.047 [2024-07-15 13:26:21.630691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.047 [2024-07-15 13:26:21.630721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:25.047 [2024-07-15 13:26:21.635570] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:25.047 [2024-07-15 13:26:21.635874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.047 [2024-07-15 13:26:21.635916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.047 [2024-07-15 13:26:21.640709] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:25.047 [2024-07-15 13:26:21.641018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.047 [2024-07-15 13:26:21.641063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:25.047 [2024-07-15 13:26:21.645808] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:25.047 [2024-07-15 13:26:21.646113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.047 [2024-07-15 13:26:21.646155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:25.047 [2024-07-15 13:26:21.650940] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:25.047 [2024-07-15 13:26:21.651279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.047 [2024-07-15 13:26:21.651310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:25.047 [2024-07-15 13:26:21.656143] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:25.047 [2024-07-15 13:26:21.656463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.047 [2024-07-15 13:26:21.656500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.047 [2024-07-15 13:26:21.661301] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:25.047 [2024-07-15 13:26:21.661604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.047 [2024-07-15 13:26:21.661641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:25.047 [2024-07-15 13:26:21.666483] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:25.048 [2024-07-15 13:26:21.666813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.048 [2024-07-15 13:26:21.666856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:25.048 [2024-07-15 13:26:21.671627] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:25.048 [2024-07-15 13:26:21.671916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.048 [2024-07-15 13:26:21.671944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:25.048 [2024-07-15 13:26:21.676815] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:25.048 [2024-07-15 13:26:21.677126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.048 [2024-07-15 13:26:21.677165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.048 [2024-07-15 13:26:21.681990] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:25.048 [2024-07-15 13:26:21.682298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.048 [2024-07-15 13:26:21.682322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:25.048 [2024-07-15 13:26:21.687119] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:25.048 [2024-07-15 13:26:21.687419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.048 [2024-07-15 13:26:21.687444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:25.048 [2024-07-15 13:26:21.692228] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:25.048 [2024-07-15 13:26:21.692515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.048 [2024-07-15 13:26:21.692550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:25.048 [2024-07-15 13:26:21.697396] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:25.048 [2024-07-15 13:26:21.697686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.048 [2024-07-15 13:26:21.697708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.048 [2024-07-15 13:26:21.702423] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:25.048 [2024-07-15 13:26:21.702711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.048 [2024-07-15 13:26:21.702743] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:25.048 [2024-07-15 13:26:21.707525] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:25.048 [2024-07-15 13:26:21.707816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.048 [2024-07-15 13:26:21.707850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:25.048 [2024-07-15 13:26:21.712589] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:25.048 [2024-07-15 13:26:21.712876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.048 [2024-07-15 13:26:21.712907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:25.048 [2024-07-15 13:26:21.717670] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:25.048 [2024-07-15 13:26:21.717958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.048 [2024-07-15 13:26:21.717998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.048 [2024-07-15 13:26:21.722788] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:25.048 [2024-07-15 13:26:21.723082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.048 [2024-07-15 13:26:21.723113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:25.048 [2024-07-15 13:26:21.727885] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:25.048 [2024-07-15 13:26:21.728176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.048 [2024-07-15 13:26:21.728220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:25.048 [2024-07-15 13:26:21.733000] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:25.048 [2024-07-15 13:26:21.733302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.048 [2024-07-15 13:26:21.733326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:25.048 [2024-07-15 13:26:21.738119] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:25.048 [2024-07-15 13:26:21.738423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.048 
[2024-07-15 13:26:21.738456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.048 [2024-07-15 13:26:21.743254] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:25.048 [2024-07-15 13:26:21.743546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.048 [2024-07-15 13:26:21.743579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:25.048 [2024-07-15 13:26:21.748439] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:25.048 [2024-07-15 13:26:21.748732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.048 [2024-07-15 13:26:21.748763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:25.048 [2024-07-15 13:26:21.753528] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:25.048 [2024-07-15 13:26:21.753819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.048 [2024-07-15 13:26:21.753850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:25.048 [2024-07-15 13:26:21.758669] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:25.048 [2024-07-15 13:26:21.758983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.048 [2024-07-15 13:26:21.759017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.048 [2024-07-15 13:26:21.763804] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:25.048 [2024-07-15 13:26:21.764106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.048 [2024-07-15 13:26:21.764141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:25.048 [2024-07-15 13:26:21.768888] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:25.048 [2024-07-15 13:26:21.769176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.048 [2024-07-15 13:26:21.769217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:25.048 [2024-07-15 13:26:21.773956] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:25.048 [2024-07-15 13:26:21.774259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:27:25.048 [2024-07-15 13:26:21.774283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:25.048 [2024-07-15 13:26:21.779076] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:25.048 [2024-07-15 13:26:21.779377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.048 [2024-07-15 13:26:21.779408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.048 [2024-07-15 13:26:21.784159] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:25.048 [2024-07-15 13:26:21.784462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.048 [2024-07-15 13:26:21.784495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:25.307 [2024-07-15 13:26:21.789313] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:25.307 [2024-07-15 13:26:21.789605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.307 [2024-07-15 13:26:21.789628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:25.307 [2024-07-15 13:26:21.794389] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:25.307 [2024-07-15 13:26:21.794682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.308 [2024-07-15 13:26:21.794710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:25.308 [2024-07-15 13:26:21.799544] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:25.308 [2024-07-15 13:26:21.799831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.308 [2024-07-15 13:26:21.799863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.308 [2024-07-15 13:26:21.804640] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:25.308 [2024-07-15 13:26:21.804927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.308 [2024-07-15 13:26:21.804951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:25.308 [2024-07-15 13:26:21.809743] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:25.308 [2024-07-15 13:26:21.810029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.308 [2024-07-15 13:26:21.810067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:25.308 [2024-07-15 13:26:21.814856] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:25.308 [2024-07-15 13:26:21.815162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.308 [2024-07-15 13:26:21.815198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:25.308 [2024-07-15 13:26:21.819939] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:25.308 [2024-07-15 13:26:21.820243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.308 [2024-07-15 13:26:21.820266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.308 [2024-07-15 13:26:21.825067] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:25.308 [2024-07-15 13:26:21.825372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.308 [2024-07-15 13:26:21.825404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:25.308 [2024-07-15 13:26:21.830226] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:25.308 [2024-07-15 13:26:21.830527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.308 [2024-07-15 13:26:21.830562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:25.308 [2024-07-15 13:26:21.835369] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:25.308 [2024-07-15 13:26:21.835660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.308 [2024-07-15 13:26:21.835691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:25.308 [2024-07-15 13:26:21.840500] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:25.308 [2024-07-15 13:26:21.840799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.308 [2024-07-15 13:26:21.840832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.308 [2024-07-15 13:26:21.845651] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:25.308 [2024-07-15 13:26:21.845955] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.308 [2024-07-15 13:26:21.845979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:25.308 [2024-07-15 13:26:21.850795] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:25.308 [2024-07-15 13:26:21.851107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.308 [2024-07-15 13:26:21.851142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:25.308 [2024-07-15 13:26:21.855930] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:25.308 [2024-07-15 13:26:21.856248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.308 [2024-07-15 13:26:21.856282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:25.308 [2024-07-15 13:26:21.861059] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:25.308 [2024-07-15 13:26:21.861372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.308 [2024-07-15 13:26:21.861398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.308 [2024-07-15 13:26:21.866175] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:25.308 [2024-07-15 13:26:21.866495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.308 [2024-07-15 13:26:21.866522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:25.308 [2024-07-15 13:26:21.871356] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:25.308 [2024-07-15 13:26:21.871665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.308 [2024-07-15 13:26:21.871697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:25.308 [2024-07-15 13:26:21.876543] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:25.308 [2024-07-15 13:26:21.876853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.308 [2024-07-15 13:26:21.876888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:25.308 [2024-07-15 13:26:21.881728] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:25.308 [2024-07-15 13:26:21.882033] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.308 [2024-07-15 13:26:21.882073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.308 [2024-07-15 13:26:21.886931] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:25.308 [2024-07-15 13:26:21.887255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.308 [2024-07-15 13:26:21.887282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:25.308 [2024-07-15 13:26:21.892118] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:25.308 [2024-07-15 13:26:21.892436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.308 [2024-07-15 13:26:21.892471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:25.308 [2024-07-15 13:26:21.897221] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:25.308 [2024-07-15 13:26:21.897510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.308 [2024-07-15 13:26:21.897532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:25.308 [2024-07-15 13:26:21.902390] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:25.308 [2024-07-15 13:26:21.902694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.308 [2024-07-15 13:26:21.902727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.308 [2024-07-15 13:26:21.907457] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:25.308 [2024-07-15 13:26:21.907757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.308 [2024-07-15 13:26:21.907797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:25.308 [2024-07-15 13:26:21.912534] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:25.308 [2024-07-15 13:26:21.912826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.308 [2024-07-15 13:26:21.912860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:25.308 [2024-07-15 13:26:21.917619] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 
00:27:25.308 [2024-07-15 13:26:21.917907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.308 [2024-07-15 13:26:21.917939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:25.308 [2024-07-15 13:26:21.922740] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11b90) with pdu=0x2000190fef90 00:27:25.308 [2024-07-15 13:26:21.923035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.308 [2024-07-15 13:26:21.923066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.308 00:27:25.308 Latency(us) 00:27:25.308 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:25.308 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:27:25.308 nvme0n1 : 2.00 6014.52 751.81 0.00 0.00 2654.52 2368.23 5540.77 00:27:25.308 =================================================================================================================== 00:27:25.308 Total : 6014.52 751.81 0.00 0.00 2654.52 2368.23 5540.77 00:27:25.308 0 00:27:25.308 13:26:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:25.308 13:26:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:25.308 13:26:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:25.308 | .driver_specific 00:27:25.308 | .nvme_error 00:27:25.308 | .status_code 00:27:25.308 | .command_transient_transport_error' 00:27:25.308 13:26:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:25.566 13:26:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 388 > 0 )) 00:27:25.567 13:26:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 112284 00:27:25.567 13:26:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 112284 ']' 00:27:25.567 13:26:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 112284 00:27:25.567 13:26:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:27:25.567 13:26:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:25.567 13:26:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 112284 00:27:25.567 13:26:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:25.567 13:26:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:25.567 killing process with pid 112284 00:27:25.567 13:26:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 112284' 00:27:25.567 13:26:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 112284 00:27:25.567 Received shutdown signal, test time was about 2.000000 seconds 00:27:25.567 00:27:25.567 Latency(us) 00:27:25.567 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:27:25.567 =================================================================================================================== 00:27:25.567 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:25.567 13:26:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 112284 00:27:25.825 13:26:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 111973 00:27:25.825 13:26:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 111973 ']' 00:27:25.825 13:26:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 111973 00:27:25.825 13:26:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:27:25.825 13:26:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:25.825 13:26:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 111973 00:27:25.825 13:26:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:25.825 13:26:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:25.825 killing process with pid 111973 00:27:25.825 13:26:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 111973' 00:27:25.825 13:26:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 111973 00:27:25.825 13:26:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 111973 00:27:26.082 00:27:26.082 real 0m18.909s 00:27:26.082 user 0m36.464s 00:27:26.082 sys 0m4.633s 00:27:26.082 13:26:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:26.082 ************************************ 00:27:26.082 END TEST nvmf_digest_error 00:27:26.082 ************************************ 00:27:26.082 13:26:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:26.082 13:26:22 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:27:26.082 13:26:22 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:27:26.082 13:26:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:26.082 13:26:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:27:26.082 13:26:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:26.082 13:26:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:27:26.082 13:26:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:26.082 13:26:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:26.082 rmmod nvme_tcp 00:27:26.349 rmmod nvme_fabrics 00:27:26.349 rmmod nvme_keyring 00:27:26.349 13:26:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:26.349 13:26:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:27:26.349 13:26:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:27:26.349 13:26:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 111973 ']' 00:27:26.349 13:26:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 111973 00:27:26.349 13:26:22 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@946 -- # '[' -z 111973 ']' 00:27:26.349 13:26:22 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@950 -- # kill -0 111973 00:27:26.349 
/home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (111973) - No such process 00:27:26.349 Process with pid 111973 is not found 00:27:26.349 13:26:22 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@973 -- # echo 'Process with pid 111973 is not found' 00:27:26.349 13:26:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:26.349 13:26:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:26.349 13:26:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:26.349 13:26:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:26.349 13:26:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:26.349 13:26:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:26.349 13:26:22 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:26.349 13:26:22 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:26.349 13:26:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:27:26.349 00:27:26.349 real 0m38.030s 00:27:26.349 user 1m11.801s 00:27:26.349 sys 0m9.645s 00:27:26.349 13:26:22 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:26.349 13:26:22 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:26.349 ************************************ 00:27:26.349 END TEST nvmf_digest 00:27:26.349 ************************************ 00:27:26.349 13:26:22 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 1 -eq 1 ]] 00:27:26.349 13:26:22 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ tcp == \t\c\p ]] 00:27:26.349 13:26:22 nvmf_tcp -- nvmf/nvmf.sh@113 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:27:26.349 13:26:22 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:27:26.349 13:26:22 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:26.349 13:26:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:26.349 ************************************ 00:27:26.349 START TEST nvmf_mdns_discovery 00:27:26.349 ************************************ 00:27:26.349 13:26:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:27:26.350 * Looking for test storage... 
00:27:26.350 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:27:26.350 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:26.350 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # uname -s 00:27:26.350 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:26.350 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:26.350 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:26.350 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:26.350 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:26.350 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:26.350 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:26.350 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:26.350 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:26.350 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:26.350 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:27:26.350 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=c8b8b44b-387e-43b9-a950-dc0d98528a02 00:27:26.350 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:26.350 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:26.350 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:26.350 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:26.350 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:26.350 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:26.350 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:26.350 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:26.350 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:26.350 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:26.350 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:26.350 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@5 -- # export PATH 00:27:26.350 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:26.350 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@47 -- # : 0 00:27:26.350 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:26.350 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:26.350 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:26.350 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:26.350 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:26.350 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:26.350 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:26.350 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:26.350 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@13 -- # DISCOVERY_FILTER=address 00:27:26.350 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@14 -- # DISCOVERY_PORT=8009 00:27:26.350 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:27:26.350 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@18 -- # NQN=nqn.2016-06.io.spdk:cnode 00:27:26.350 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@19 -- # NQN2=nqn.2016-06.io.spdk:cnode2 00:27:26.350 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@21 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:27:26.350 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@22 -- # HOST_SOCK=/tmp/host.sock 00:27:26.350 
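What follows in the log is nvmftestinit wiring up the virtual test network with iproute2. Condensed into a standalone sketch (interface, namespace, and address names are taken from the trace below; the second target interface pair nvmf_tgt_if2/nvmf_tgt_br2 on 10.0.0.3 is built the same way and is omitted here, as are the teardown and error checks the harness performs), the topology is roughly:

    # target side lives in its own network namespace
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    # addressing: initiator on 10.0.0.1, target listener on 10.0.0.2 inside the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # a bridge stitches the veth peers left in the root namespace together
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # allow NVMe/TCP traffic to port 4420 and verify reachability
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2

The initiator-side interface stays in the root namespace while the target listeners sit inside nvmf_tgt_ns_spdk, which is why the harness later launches nvmf_tgt through ip netns exec.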
13:26:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@24 -- # nvmftestinit 00:27:26.350 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:26.350 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:26.350 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:26.350 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:26.350 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:26.350 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:26.350 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:26.350 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:26.350 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:27:26.350 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:27:26.350 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:27:26.350 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:27:26.350 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:27:26.350 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:27:26.350 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:26.350 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:26.350 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:27:26.350 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:27:26.350 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:26.350 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:26.350 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:26.350 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:26.350 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:26.350 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:26.350 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:26.350 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:26.350 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:27:26.628 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:27:26.628 Cannot find device "nvmf_tgt_br" 00:27:26.628 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@155 -- # true 00:27:26.628 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:27:26.628 Cannot find device "nvmf_tgt_br2" 00:27:26.628 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@156 -- # true 00:27:26.628 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br 
down 00:27:26.628 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:27:26.628 Cannot find device "nvmf_tgt_br" 00:27:26.628 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@158 -- # true 00:27:26.629 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:27:26.629 Cannot find device "nvmf_tgt_br2" 00:27:26.629 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@159 -- # true 00:27:26.629 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:27:26.629 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:27:26.629 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:26.629 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:26.629 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # true 00:27:26.629 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:26.629 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:26.629 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # true 00:27:26.629 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:27:26.629 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:26.629 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:26.629 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:26.629 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:26.629 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:26.629 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:26.629 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:27:26.629 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:27:26.629 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:27:26.629 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:27:26.629 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:27:26.629 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:27:26.629 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:26.629 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:26.629 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:26.629 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:27:26.629 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:27:26.629 13:26:23 
nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:27:26.629 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:26.887 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:26.887 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:26.887 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:26.887 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:27:26.887 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:26.887 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.160 ms 00:27:26.887 00:27:26.887 --- 10.0.0.2 ping statistics --- 00:27:26.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:26.887 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:27:26.887 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:27:26.887 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:26.887 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:27:26.887 00:27:26.887 --- 10.0.0.3 ping statistics --- 00:27:26.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:26.887 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:27:26.887 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:26.887 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:26.887 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:27:26.887 00:27:26.887 --- 10.0.0.1 ping statistics --- 00:27:26.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:26.887 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:27:26.887 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:26.887 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@433 -- # return 0 00:27:26.887 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:26.887 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:26.887 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:26.887 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:26.887 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:26.887 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:26.887 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:26.887 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@29 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:27:26.887 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:26.887 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:26.887 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:26.887 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@481 -- # nvmfpid=112571 00:27:26.887 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@482 -- # waitforlisten 112571 00:27:26.887 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk 
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:27:26.887 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@827 -- # '[' -z 112571 ']' 00:27:26.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:26.887 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:26.887 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:26.887 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:26.887 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:26.887 13:26:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:26.887 [2024-07-15 13:26:23.508617] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:27:26.887 [2024-07-15 13:26:23.508720] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:27.143 [2024-07-15 13:26:23.642138] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:27.143 [2024-07-15 13:26:23.742541] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:27.143 [2024-07-15 13:26:23.742597] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:27.143 [2024-07-15 13:26:23.742610] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:27.143 [2024-07-15 13:26:23.742618] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:27.143 [2024-07-15 13:26:23.742625] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
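For readers reconstructing this environment by hand, the interface setup traced above (the nvmf/common.sh netns/veth/bridge/iptables/ping sequence) reduces to the sketch below. It is an illustrative reconstruction, not the harness script itself: names, addresses and port 4420 are taken from the trace, and it assumes a root shell on a Linux host with iproute2 and iptables available.

#!/usr/bin/env bash
# Sketch of the test topology built above: one namespace (nvmf_tgt_ns_spdk)
# holding the target-side veth ends (10.0.0.2, 10.0.0.3), the initiator side
# on 10.0.0.1, and a bridge joining the host-side peers.
set -euo pipefail

ip netns add nvmf_tgt_ns_spdk

# veth pairs: one initiator link, two target links
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# move the target ends into the namespace and assign addresses
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# bring everything up
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# bridge the host-side peers together
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# allow NVMe/TCP (4420) in and let traffic hairpin across the bridge
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# connectivity check in both directions, as the harness does
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1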
00:27:27.143 [2024-07-15 13:26:23.742655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:28.075 13:26:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:28.075 13:26:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@860 -- # return 0 00:27:28.075 13:26:24 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:28.075 13:26:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:28.075 13:26:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:28.075 13:26:24 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:28.075 13:26:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@31 -- # rpc_cmd nvmf_set_config --discovery-filter=address 00:27:28.075 13:26:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.075 13:26:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:28.075 13:26:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.075 13:26:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@32 -- # rpc_cmd framework_start_init 00:27:28.075 13:26:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.075 13:26:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:28.075 13:26:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.075 13:26:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:28.075 13:26:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.075 13:26:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:28.075 [2024-07-15 13:26:24.688633] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:28.075 13:26:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.075 13:26:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:27:28.075 13:26:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.075 13:26:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:28.075 [2024-07-15 13:26:24.696753] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:27:28.075 13:26:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.075 13:26:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null0 1000 512 00:27:28.075 13:26:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.075 13:26:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:28.075 null0 00:27:28.075 13:26:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.075 13:26:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null1 1000 512 00:27:28.075 13:26:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.075 13:26:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 
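The target-side bring-up traced here (nvmf_tgt launched inside the namespace with --wait-for-rpc, then configured over JSON-RPC) can be approximated as follows. scripts/rpc.py stands in for the harness's rpc_cmd wrapper, SPDK_DIR is a placeholder for the checkout, and the default /var/tmp/spdk.sock RPC socket is assumed, as the waitforlisten message above indicates.

#!/usr/bin/env bash
# Sketch only: start the SPDK NVMe-oF target in the namespace and replay the
# RPC sequence from the trace (discovery filter, transport, discovery listener,
# null bdevs). The -i/-e/-m flags mirror the traced command line.
set -euo pipefail
SPDK_DIR=${SPDK_DIR:-$HOME/spdk}                 # placeholder path
RPC="$SPDK_DIR/scripts/rpc.py"                   # talks to /var/tmp/spdk.sock by default

ip netns exec nvmf_tgt_ns_spdk \
    "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &

# crude equivalent of waitforlisten: poll until the RPC socket answers
until "$RPC" rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done

"$RPC" nvmf_set_config --discovery-filter=address   # filter discovery log entries by address
"$RPC" framework_start_init                          # leave the --wait-for-rpc state
"$RPC" nvmf_create_transport -t tcp -o -u 8192       # TCP transport, 8192-byte IO unit

# discovery subsystem listener on the first in-namespace address
"$RPC" nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
    -t tcp -a 10.0.0.2 -s 8009

# null bdevs used as namespaces later in the test (1000 MiB, 512-byte blocks)
for b in null0 null1 null2 null3; do
    "$RPC" bdev_null_create "$b" 1000 512
done
"$RPC" bdev_wait_for_examine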
00:27:28.075 null1 00:27:28.075 13:26:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.075 13:26:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null2 1000 512 00:27:28.075 13:26:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.075 13:26:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:28.075 null2 00:27:28.075 13:26:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.075 13:26:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_null_create null3 1000 512 00:27:28.075 13:26:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.075 13:26:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:28.075 null3 00:27:28.075 13:26:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.075 13:26:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@40 -- # rpc_cmd bdev_wait_for_examine 00:27:28.075 13:26:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.075 13:26:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:28.075 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:27:28.075 13:26:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.075 13:26:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@48 -- # hostpid=112621 00:27:28.075 13:26:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:27:28.075 13:26:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@49 -- # waitforlisten 112621 /tmp/host.sock 00:27:28.075 13:26:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@827 -- # '[' -z 112621 ']' 00:27:28.075 13:26:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:27:28.075 13:26:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:28.075 13:26:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:27:28.075 13:26:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:28.075 13:26:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:28.075 [2024-07-15 13:26:24.806413] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:27:28.075 [2024-07-15 13:26:24.806536] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112621 ] 00:27:28.333 [2024-07-15 13:26:24.951497] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:28.333 [2024-07-15 13:26:25.063588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:29.266 13:26:25 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:29.266 13:26:25 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@860 -- # return 0 00:27:29.266 13:26:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 00:27:29.266 13:26:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@52 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahipid;' EXIT 00:27:29.266 13:26:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@56 -- # avahi-daemon --kill 00:27:29.266 13:26:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@58 -- # avahipid=112650 00:27:29.266 13:26:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@59 -- # sleep 1 00:27:29.266 13:26:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 00:27:29.266 13:26:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 00:27:29.266 Process 986 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 00:27:29.266 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 00:27:29.266 Successfully dropped root privileges. 00:27:29.266 avahi-daemon 0.8 starting up. 00:27:29.266 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 00:27:30.198 Successfully called chroot(). 00:27:30.198 Successfully dropped remaining capabilities. 00:27:30.198 No service file found in /etc/avahi/services. 00:27:30.198 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:27:30.198 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 00:27:30.198 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:27:30.198 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 00:27:30.198 Network interface enumeration completed. 00:27:30.198 Registering new address record for fe80::587a:63ff:fef9:f6a7 on nvmf_tgt_if2.*. 00:27:30.198 Registering new address record for 10.0.0.3 on nvmf_tgt_if2.IPv4. 00:27:30.198 Registering new address record for fe80::e073:5fff:fecc:6446 on nvmf_tgt_if.*. 00:27:30.198 Registering new address record for 10.0.0.2 on nvmf_tgt_if.IPv4. 00:27:30.198 Server startup complete. Host name is fedora38-cloud-1716830599-074-updated-1705279005.local. Local service cookie is 1860447134. 
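The discovery-host side set up in this stretch consists of a second SPDK application on its own RPC socket (/tmp/host.sock) plus an avahi-daemon confined to the target namespace and restricted to the two in-namespace interfaces. A sketch under the same placeholder assumptions as above:

#!/usr/bin/env bash
# Sketch only: second nvmf_tgt acting as the discovery host, plus a namespaced
# avahi-daemon fed the same [server] config the harness passes via /dev/fd.
set -euo pipefail
SPDK_DIR=${SPDK_DIR:-$HOME/spdk}

"$SPDK_DIR/build/bin/nvmf_tgt" -m 0x1 -r /tmp/host.sock &
hostpid=$!

avahi-daemon --kill || true                       # stop any system-wide instance
ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f <(printf '%s\n' \
    '[server]' \
    'allow-interfaces=nvmf_tgt_if,nvmf_tgt_if2' \
    'use-ipv4=yes' \
    'use-ipv6=no') &
avahipid=$!
sleep 1                                           # give avahi time to join the mDNS groups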
00:27:30.455 13:26:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:27:30.455 13:26:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.456 13:26:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:30.456 13:26:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.456 13:26:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@62 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:27:30.456 13:26:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.456 13:26:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:30.456 13:26:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.456 13:26:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # notify_id=0 00:27:30.456 13:26:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # get_subsystem_names 00:27:30.456 13:26:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:30.456 13:26:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:27:30.456 13:26:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.456 13:26:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:30.456 13:26:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:27:30.456 13:26:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:27:30.456 13:26:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.456 13:26:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # [[ '' == '' ]] 00:27:30.456 13:26:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # get_bdev_list 00:27:30.456 13:26:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:30.456 13:26:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:27:30.456 13:26:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:27:30.456 13:26:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.456 13:26:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:30.456 13:26:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:27:30.456 13:26:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.456 13:26:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # [[ '' == '' ]] 00:27:30.456 13:26:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:27:30.456 13:26:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.456 13:26:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:30.456 13:26:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.456 13:26:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # get_subsystem_names 00:27:30.456 13:26:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 
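Discovery itself is driven entirely through the host-side socket: enable bdev_nvme debug logging, then start an mDNS discovery service that browses _nvme-disc._tcp and attaches with host NQN nqn.2021-12.io.spdk:test. A minimal sketch, again substituting scripts/rpc.py for the rpc_cmd wrapper:

RPC="${SPDK_DIR:-$HOME/spdk}/scripts/rpc.py"      # placeholder path, as above

"$RPC" -s /tmp/host.sock log_set_flag bdev_nvme
"$RPC" -s /tmp/host.sock bdev_nvme_start_mdns_discovery \
    -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test

# controllers and bdevs appear asynchronously as services resolve; the checks in
# the trace poll these two RPCs (empty at first, populated a few seconds later)
"$RPC" -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
"$RPC" -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs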
00:27:30.456 13:26:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:27:30.456 13:26:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:27:30.456 13:26:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.456 13:26:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:30.456 13:26:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:27:30.456 13:26:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.456 13:26:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ '' == '' ]] 00:27:30.456 13:26:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # get_bdev_list 00:27:30.456 13:26:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:30.456 13:26:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:27:30.456 13:26:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.456 13:26:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:30.456 13:26:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:27:30.456 13:26:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:27:30.456 13:26:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.714 13:26:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # [[ '' == '' ]] 00:27:30.714 13:26:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:27:30.714 13:26:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.714 13:26:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:30.714 13:26:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.714 13:26:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@100 -- # get_subsystem_names 00:27:30.714 13:26:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:30.714 13:26:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.714 13:26:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:30.714 13:26:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:27:30.714 13:26:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:27:30.714 13:26:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:27:30.714 13:26:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.714 [2024-07-15 13:26:27.257761] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:27:30.714 13:26:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@100 -- # [[ '' == '' ]] 00:27:30.714 13:26:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@101 -- # get_bdev_list 00:27:30.714 13:26:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:27:30.714 13:26:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:27:30.714 13:26:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:30.714 13:26:27 
nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:27:30.714 13:26:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.714 13:26:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:30.714 13:26:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.714 13:26:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@101 -- # [[ '' == '' ]] 00:27:30.714 13:26:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@105 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:30.714 13:26:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.714 13:26:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:30.715 [2024-07-15 13:26:27.329429] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:30.715 13:26:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.715 13:26:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@109 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:27:30.715 13:26:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.715 13:26:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:30.715 13:26:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.715 13:26:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@112 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 00:27:30.715 13:26:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.715 13:26:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:30.715 13:26:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.715 13:26:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@113 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 00:27:30.715 13:26:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.715 13:26:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:30.715 13:26:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.715 13:26:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 00:27:30.715 13:26:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.715 13:26:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:30.715 13:26:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.715 13:26:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@119 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:27:30.715 13:26:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.715 13:26:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:30.715 [2024-07-15 13:26:27.369415] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:27:30.715 13:26:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
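On the target side, the stretch above publishes the storage: one subsystem per data address, each with an allowed host and a TCP listener, a second discovery listener on 10.0.0.3, and finally nvmf_publish_mdns_prr to advertise both discovery endpoints as _nvme-disc._tcp services (the registrations avahi logs a few lines further down). Condensed into a sketch with rpc.py in place of rpc_cmd:

RPC="${SPDK_DIR:-$HOME/spdk}/scripts/rpc.py"      # target-side socket /var/tmp/spdk.sock

# subsystem reachable via 10.0.0.2, first namespace null0 (null1 is added later in the test)
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
"$RPC" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test

# subsystem reachable via 10.0.0.3, first namespace null2 (null3 is added later in the test)
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2
"$RPC" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test
"$RPC" nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420

# advertise the discovery endpoints over mDNS
"$RPC" nvmf_publish_mdns_prr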
00:27:30.715 13:26:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@121 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:27:30.715 13:26:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.715 13:26:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:30.715 [2024-07-15 13:26:27.377380] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:30.715 13:26:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.715 13:26:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@124 -- # rpc_cmd nvmf_publish_mdns_prr 00:27:30.715 13:26:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.715 13:26:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:30.715 13:26:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.715 13:26:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@125 -- # sleep 5 00:27:31.648 [2024-07-15 13:26:28.157765] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:27:32.213 [2024-07-15 13:26:28.757788] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:27:32.213 [2024-07-15 13:26:28.757840] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:27:32.213 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:27:32.213 cookie is 0 00:27:32.213 is_local: 1 00:27:32.213 our_own: 0 00:27:32.213 wide_area: 0 00:27:32.213 multicast: 1 00:27:32.213 cached: 1 00:27:32.213 [2024-07-15 13:26:28.857774] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:27:32.213 [2024-07-15 13:26:28.857825] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:27:32.213 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" "nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:27:32.213 cookie is 0 00:27:32.213 is_local: 1 00:27:32.213 our_own: 0 00:27:32.213 wide_area: 0 00:27:32.213 multicast: 1 00:27:32.213 cached: 1 00:27:32.213 [2024-07-15 13:26:28.857854] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. 
trid->traddr: 10.0.0.3 trid->trsvcid: 8009 00:27:32.471 [2024-07-15 13:26:28.957776] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:27:32.471 [2024-07-15 13:26:28.957819] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:27:32.471 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:27:32.471 cookie is 0 00:27:32.471 is_local: 1 00:27:32.471 our_own: 0 00:27:32.471 wide_area: 0 00:27:32.471 multicast: 1 00:27:32.471 cached: 1 00:27:32.471 [2024-07-15 13:26:29.057775] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:27:32.471 [2024-07-15 13:26:29.057820] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:27:32.471 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" "nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:27:32.471 cookie is 0 00:27:32.471 is_local: 1 00:27:32.471 our_own: 0 00:27:32.471 wide_area: 0 00:27:32.471 multicast: 1 00:27:32.471 cached: 1 00:27:32.471 [2024-07-15 13:26:29.057843] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.2 trid->trsvcid: 8009 00:27:33.037 [2024-07-15 13:26:29.763076] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:27:33.037 [2024-07-15 13:26:29.763118] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:27:33.037 [2024-07-15 13:26:29.763137] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:27:33.295 [2024-07-15 13:26:29.850244] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 new subsystem mdns0_nvme0 00:27:33.295 [2024-07-15 13:26:29.914669] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:27:33.295 [2024-07-15 13:26:29.914720] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:27:33.295 [2024-07-15 13:26:29.962981] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:33.295 [2024-07-15 13:26:29.963036] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:33.295 [2024-07-15 13:26:29.963056] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:33.553 [2024-07-15 13:26:30.049164] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem mdns1_nvme0 00:27:33.553 [2024-07-15 13:26:30.104561] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:27:33.553 [2024-07-15 13:26:30.104608] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:36.108 13:26:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@127 -- # get_mdns_discovery_svcs 00:27:36.108 13:26:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:27:36.108 13:26:32 nvmf_tcp.nvmf_mdns_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.108 13:26:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:36.108 13:26:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:27:36.108 13:26:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:27:36.108 13:26:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:27:36.108 13:26:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.108 13:26:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@127 -- # [[ mdns == \m\d\n\s ]] 00:27:36.108 13:26:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # get_discovery_ctrlrs 00:27:36.108 13:26:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:36.108 13:26:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.108 13:26:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:36.108 13:26:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:27:36.108 13:26:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:27:36.108 13:26:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:27:36.108 13:26:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.108 13:26:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:27:36.108 13:26:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # get_subsystem_names 00:27:36.108 13:26:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:27:36.108 13:26:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:36.108 13:26:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:27:36.108 13:26:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:27:36.108 13:26:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.108 13:26:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:36.108 13:26:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.108 13:26:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:27:36.108 13:26:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@130 -- # get_bdev_list 00:27:36.108 13:26:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:27:36.108 13:26:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:36.108 13:26:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:27:36.108 13:26:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.108 13:26:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:36.108 13:26:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:27:36.109 13:26:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.109 13:26:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@130 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == 
\m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 00:27:36.109 13:26:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@131 -- # get_subsystem_paths mdns0_nvme0 00:27:36.109 13:26:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:27:36.109 13:26:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.109 13:26:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:36.109 13:26:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:36.109 13:26:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:27:36.109 13:26:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:27:36.109 13:26:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.109 13:26:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@131 -- # [[ 4420 == \4\4\2\0 ]] 00:27:36.109 13:26:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@132 -- # get_subsystem_paths mdns1_nvme0 00:27:36.109 13:26:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:27:36.109 13:26:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.109 13:26:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:36.109 13:26:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:36.109 13:26:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:27:36.109 13:26:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:27:36.109 13:26:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.109 13:26:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@132 -- # [[ 4420 == \4\4\2\0 ]] 00:27:36.109 13:26:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@133 -- # get_notification_count 00:27:36.109 13:26:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:27:36.109 13:26:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:27:36.109 13:26:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.109 13:26:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:36.109 13:26:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.109 13:26:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=2 00:27:36.109 13:26:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=2 00:27:36.109 13:26:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@134 -- # [[ 2 == 2 ]] 00:27:36.109 13:26:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:27:36.109 13:26:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.109 13:26:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:36.109 13:26:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.109 13:26:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@138 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 00:27:36.109 13:26:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.109 13:26:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:36.109 13:26:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.109 13:26:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@139 -- # sleep 1 00:27:37.480 13:26:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@141 -- # get_bdev_list 00:27:37.480 13:26:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:37.480 13:26:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:27:37.480 13:26:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.480 13:26:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:37.480 13:26:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:27:37.480 13:26:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:27:37.480 13:26:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.480 13:26:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@141 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:27:37.480 13:26:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@142 -- # get_notification_count 00:27:37.480 13:26:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:27:37.480 13:26:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:27:37.480 13:26:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.480 13:26:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:37.480 13:26:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.480 13:26:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=2 00:27:37.480 13:26:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 00:27:37.480 13:26:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@143 -- # [[ 2 == 2 ]] 00:27:37.480 13:26:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@147 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:27:37.480 13:26:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.480 13:26:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:37.480 [2024-07-15 13:26:33.908570] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:37.480 [2024-07-15 13:26:33.909579] bdev_nvme.c:6966:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:27:37.480 [2024-07-15 13:26:33.909628] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:27:37.480 [2024-07-15 13:26:33.909676] bdev_nvme.c:6966:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:27:37.480 [2024-07-15 13:26:33.909692] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:37.480 13:26:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.480 13:26:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4421 00:27:37.480 13:26:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.480 13:26:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:37.480 [2024-07-15 13:26:33.916467] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:27:37.480 [2024-07-15 13:26:33.917597] bdev_nvme.c:6966:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:27:37.480 [2024-07-15 13:26:33.917668] bdev_nvme.c:6966:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:27:37.480 13:26:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.480 13:26:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@149 -- # sleep 1 00:27:37.480 [2024-07-15 13:26:34.048695] bdev_nvme.c:6908:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new path for mdns0_nvme0 00:27:37.480 [2024-07-15 13:26:34.048976] bdev_nvme.c:6908:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for mdns1_nvme0 00:27:37.480 [2024-07-15 13:26:34.106227] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:27:37.480 [2024-07-15 13:26:34.106282] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:37.480 [2024-07-15 13:26:34.106290] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:37.480 [2024-07-15 13:26:34.106319] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:37.481 [2024-07-15 13:26:34.107033] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:27:37.481 [2024-07-15 13:26:34.107058] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:27:37.481 [2024-07-15 13:26:34.107065] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:27:37.481 [2024-07-15 13:26:34.107085] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:27:37.481 [2024-07-15 13:26:34.151810] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:37.481 [2024-07-15 13:26:34.151864] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:37.481 [2024-07-15 13:26:34.152782] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:27:37.481 [2024-07-15 13:26:34.152804] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:27:38.414 13:26:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@151 -- # get_subsystem_names 00:27:38.414 13:26:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:38.414 13:26:34 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.414 13:26:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:27:38.414 13:26:34 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:38.414 13:26:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:27:38.414 13:26:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:27:38.414 13:26:34 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.414 13:26:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@151 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:27:38.414 13:26:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@152 -- # get_bdev_list 00:27:38.414 13:26:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:38.414 13:26:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:27:38.414 13:26:34 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.414 13:26:34 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:38.414 13:26:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:27:38.414 13:26:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:27:38.414 13:26:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.414 13:26:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@152 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == 
\m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:27:38.414 13:26:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@153 -- # get_subsystem_paths mdns0_nvme0 00:27:38.414 13:26:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:27:38.414 13:26:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:38.414 13:26:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.414 13:26:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:38.414 13:26:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:27:38.414 13:26:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:27:38.414 13:26:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.414 13:26:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@153 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:27:38.414 13:26:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@154 -- # get_subsystem_paths mdns1_nvme0 00:27:38.414 13:26:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:27:38.414 13:26:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:38.414 13:26:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:27:38.414 13:26:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.414 13:26:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:27:38.414 13:26:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:38.414 13:26:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.674 13:26:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@154 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:27:38.674 13:26:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@155 -- # get_notification_count 00:27:38.674 13:26:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:27:38.674 13:26:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:27:38.674 13:26:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.674 13:26:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:38.674 13:26:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.674 13:26:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=0 00:27:38.674 13:26:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 00:27:38.674 13:26:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@156 -- # [[ 0 == 0 ]] 00:27:38.674 13:26:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@160 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:38.674 13:26:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.674 13:26:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:38.674 [2024-07-15 13:26:35.246290] bdev_nvme.c:6966:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:27:38.674 [2024-07-15 13:26:35.246341] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:27:38.674 [2024-07-15 13:26:35.246380] bdev_nvme.c:6966:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:27:38.674 [2024-07-15 13:26:35.246394] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:38.674 13:26:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.674 13:26:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@161 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:27:38.674 13:26:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.674 13:26:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:38.674 [2024-07-15 13:26:35.253299] bdev_nvme.c:6966:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:27:38.674 [2024-07-15 13:26:35.253368] bdev_nvme.c:6966:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:27:38.674 [2024-07-15 13:26:35.254663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:38.674 [2024-07-15 13:26:35.254705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:38.674 [2024-07-15 13:26:35.254720] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:38.674 [2024-07-15 13:26:35.254729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:38.674 [2024-07-15 13:26:35.254740] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:38.674 [2024-07-15 13:26:35.254759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:38.674 [2024-07-15 13:26:35.254770] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:38.674 [2024-07-15 13:26:35.254779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:38.674 [2024-07-15 13:26:35.254789] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2182640 is same with the state(5) to be set 00:27:38.674 [2024-07-15 13:26:35.254857] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:38.674 [2024-07-15 13:26:35.254872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:38.674 [2024-07-15 13:26:35.254882] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:38.674 [2024-07-15 13:26:35.254892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:38.674 [2024-07-15 13:26:35.254902] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:38.674 [2024-07-15 13:26:35.254911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:38.674 [2024-07-15 13:26:35.254921] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:38.674 [2024-07-15 13:26:35.254930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:38.674 [2024-07-15 13:26:35.254939] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6560 is same with the state(5) to be set 00:27:38.674 13:26:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.674 13:26:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@162 -- # sleep 1 00:27:38.674 [2024-07-15 13:26:35.264620] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2182640 (9): Bad file descriptor 00:27:38.674 [2024-07-15 13:26:35.264685] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6560 (9): Bad file descriptor 00:27:38.674 [2024-07-15 13:26:35.274663] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:27:38.674 [2024-07-15 13:26:35.274740] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:38.674 [2024-07-15 13:26:35.274902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.674 [2024-07-15 13:26:35.274928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2182640 with addr=10.0.0.3, port=4420 00:27:38.674 [2024-07-15 13:26:35.274942] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2182640 is same with the state(5) to be set 00:27:38.674 [2024-07-15 13:26:35.274990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.674 [2024-07-15 13:26:35.275006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6560 with addr=10.0.0.2, port=4420 00:27:38.674 [2024-07-15 13:26:35.275016] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6560 is same with the state(5) to be set 00:27:38.674 [2024-07-15 13:26:35.275034] 
nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2182640 (9): Bad file descriptor 00:27:38.674 [2024-07-15 13:26:35.275048] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6560 (9): Bad file descriptor 00:27:38.674 [2024-07-15 13:26:35.275061] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:27:38.674 [2024-07-15 13:26:35.275071] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:27:38.674 [2024-07-15 13:26:35.275082] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:27:38.674 [2024-07-15 13:26:35.275096] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:38.674 [2024-07-15 13:26:35.275104] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:38.674 [2024-07-15 13:26:35.275113] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:38.674 [2024-07-15 13:26:35.275126] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:38.674 [2024-07-15 13:26:35.275136] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:38.674 [2024-07-15 13:26:35.284808] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:38.674 [2024-07-15 13:26:35.284883] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:27:38.674 [2024-07-15 13:26:35.284980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.674 [2024-07-15 13:26:35.285002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6560 with addr=10.0.0.2, port=4420 00:27:38.674 [2024-07-15 13:26:35.285013] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6560 is same with the state(5) to be set 00:27:38.674 [2024-07-15 13:26:35.285060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.674 [2024-07-15 13:26:35.285076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2182640 with addr=10.0.0.3, port=4420 00:27:38.674 [2024-07-15 13:26:35.285085] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2182640 is same with the state(5) to be set 00:27:38.674 [2024-07-15 13:26:35.285100] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6560 (9): Bad file descriptor 00:27:38.674 [2024-07-15 13:26:35.285117] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2182640 (9): Bad file descriptor 00:27:38.674 [2024-07-15 13:26:35.285128] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:38.674 [2024-07-15 13:26:35.285137] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:38.674 [2024-07-15 13:26:35.285147] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:38.674 [2024-07-15 13:26:35.285162] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:38.674 [2024-07-15 13:26:35.285171] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:27:38.674 [2024-07-15 13:26:35.285180] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:27:38.674 [2024-07-15 13:26:35.285189] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:27:38.674 [2024-07-15 13:26:35.285202] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:38.674 [2024-07-15 13:26:35.294901] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:38.674 [2024-07-15 13:26:35.295039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.674 [2024-07-15 13:26:35.295061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6560 with addr=10.0.0.2, port=4420 00:27:38.674 [2024-07-15 13:26:35.295073] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6560 is same with the state(5) to be set 00:27:38.675 [2024-07-15 13:26:35.295104] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6560 (9): Bad file descriptor 00:27:38.675 [2024-07-15 13:26:35.295123] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:27:38.675 [2024-07-15 13:26:35.295142] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:38.675 [2024-07-15 13:26:35.295152] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:38.675 [2024-07-15 13:26:35.295169] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:38.675 [2024-07-15 13:26:35.295182] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:38.675 [2024-07-15 13:26:35.295243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.675 [2024-07-15 13:26:35.295261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2182640 with addr=10.0.0.3, port=4420 00:27:38.675 [2024-07-15 13:26:35.295271] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2182640 is same with the state(5) to be set 00:27:38.675 [2024-07-15 13:26:35.295287] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2182640 (9): Bad file descriptor 00:27:38.675 [2024-07-15 13:26:35.295319] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:27:38.675 [2024-07-15 13:26:35.295330] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:27:38.675 [2024-07-15 13:26:35.295339] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:27:38.675 [2024-07-15 13:26:35.295353] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:38.675 [2024-07-15 13:26:35.304986] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:38.675 [2024-07-15 13:26:35.305136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.675 [2024-07-15 13:26:35.305160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6560 with addr=10.0.0.2, port=4420 00:27:38.675 [2024-07-15 13:26:35.305172] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6560 is same with the state(5) to be set 00:27:38.675 [2024-07-15 13:26:35.305191] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6560 (9): Bad file descriptor 00:27:38.675 [2024-07-15 13:26:35.305229] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:38.675 [2024-07-15 13:26:35.305242] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:38.675 [2024-07-15 13:26:35.305253] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:38.675 [2024-07-15 13:26:35.305267] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:27:38.675 [2024-07-15 13:26:35.305278] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:38.675 [2024-07-15 13:26:35.305337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.675 [2024-07-15 13:26:35.305355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2182640 with addr=10.0.0.3, port=4420 00:27:38.675 [2024-07-15 13:26:35.305365] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2182640 is same with the state(5) to be set 00:27:38.675 [2024-07-15 13:26:35.305398] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2182640 (9): Bad file descriptor 00:27:38.675 [2024-07-15 13:26:35.305413] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:27:38.675 [2024-07-15 13:26:35.305423] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:27:38.675 [2024-07-15 13:26:35.305432] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:27:38.675 [2024-07-15 13:26:35.305445] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:38.675 [2024-07-15 13:26:35.315076] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:38.675 [2024-07-15 13:26:35.315221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.675 [2024-07-15 13:26:35.315243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6560 with addr=10.0.0.2, port=4420 00:27:38.675 [2024-07-15 13:26:35.315255] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6560 is same with the state(5) to be set 00:27:38.675 [2024-07-15 13:26:35.315273] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6560 (9): Bad file descriptor 00:27:38.675 [2024-07-15 13:26:35.315291] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:38.675 [2024-07-15 13:26:35.315300] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:38.675 [2024-07-15 13:26:35.315321] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:38.675 [2024-07-15 13:26:35.315336] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:38.675 [2024-07-15 13:26:35.315377] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:27:38.675 [2024-07-15 13:26:35.315440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.675 [2024-07-15 13:26:35.315459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2182640 with addr=10.0.0.3, port=4420 00:27:38.675 [2024-07-15 13:26:35.315468] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2182640 is same with the state(5) to be set 00:27:38.675 [2024-07-15 13:26:35.315483] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2182640 (9): Bad file descriptor 00:27:38.675 [2024-07-15 13:26:35.315497] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:27:38.675 [2024-07-15 13:26:35.315506] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:27:38.675 [2024-07-15 13:26:35.315515] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:27:38.675 [2024-07-15 13:26:35.315528] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:38.675 [2024-07-15 13:26:35.325160] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:38.675 [2024-07-15 13:26:35.325310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.675 [2024-07-15 13:26:35.325333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6560 with addr=10.0.0.2, port=4420 00:27:38.675 [2024-07-15 13:26:35.325344] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6560 is same with the state(5) to be set 00:27:38.675 [2024-07-15 13:26:35.325363] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6560 (9): Bad file descriptor 00:27:38.675 [2024-07-15 13:26:35.325397] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:38.675 [2024-07-15 13:26:35.325411] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:38.675 [2024-07-15 13:26:35.325422] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:38.675 [2024-07-15 13:26:35.325437] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:38.675 [2024-07-15 13:26:35.325460] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:27:38.675 [2024-07-15 13:26:35.325519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.675 [2024-07-15 13:26:35.325543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2182640 with addr=10.0.0.3, port=4420 00:27:38.675 [2024-07-15 13:26:35.325553] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2182640 is same with the state(5) to be set 00:27:38.675 [2024-07-15 13:26:35.325569] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2182640 (9): Bad file descriptor 00:27:38.675 [2024-07-15 13:26:35.325583] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:27:38.675 [2024-07-15 13:26:35.325592] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:27:38.675 [2024-07-15 13:26:35.325601] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:27:38.675 [2024-07-15 13:26:35.325615] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:38.675 [2024-07-15 13:26:35.335248] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:38.675 [2024-07-15 13:26:35.335377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.675 [2024-07-15 13:26:35.335398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6560 with addr=10.0.0.2, port=4420 00:27:38.675 [2024-07-15 13:26:35.335410] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6560 is same with the state(5) to be set 00:27:38.675 [2024-07-15 13:26:35.335427] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6560 (9): Bad file descriptor 00:27:38.675 [2024-07-15 13:26:35.335460] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:38.675 [2024-07-15 13:26:35.335472] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:38.675 [2024-07-15 13:26:35.335482] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:38.675 [2024-07-15 13:26:35.335499] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:38.675 [2024-07-15 13:26:35.335523] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:27:38.675 [2024-07-15 13:26:35.335583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.675 [2024-07-15 13:26:35.335602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2182640 with addr=10.0.0.3, port=4420 00:27:38.675 [2024-07-15 13:26:35.335612] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2182640 is same with the state(5) to be set 00:27:38.675 [2024-07-15 13:26:35.335627] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2182640 (9): Bad file descriptor 00:27:38.675 [2024-07-15 13:26:35.335652] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:27:38.675 [2024-07-15 13:26:35.335663] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:27:38.675 [2024-07-15 13:26:35.335672] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:27:38.675 [2024-07-15 13:26:35.335686] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:38.675 [2024-07-15 13:26:35.345331] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:38.675 [2024-07-15 13:26:35.345467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.675 [2024-07-15 13:26:35.345489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6560 with addr=10.0.0.2, port=4420 00:27:38.675 [2024-07-15 13:26:35.345500] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6560 is same with the state(5) to be set 00:27:38.675 [2024-07-15 13:26:35.345519] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6560 (9): Bad file descriptor 00:27:38.675 [2024-07-15 13:26:35.345552] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:38.675 [2024-07-15 13:26:35.345563] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:38.675 [2024-07-15 13:26:35.345574] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:38.675 [2024-07-15 13:26:35.345595] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:38.675 [2024-07-15 13:26:35.345611] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:27:38.675 [2024-07-15 13:26:35.345670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.676 [2024-07-15 13:26:35.345688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2182640 with addr=10.0.0.3, port=4420 00:27:38.676 [2024-07-15 13:26:35.345698] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2182640 is same with the state(5) to be set 00:27:38.676 [2024-07-15 13:26:35.345713] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2182640 (9): Bad file descriptor 00:27:38.676 [2024-07-15 13:26:35.345727] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:27:38.676 [2024-07-15 13:26:35.345735] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:27:38.676 [2024-07-15 13:26:35.345745] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:27:38.676 [2024-07-15 13:26:35.345758] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:38.676 [2024-07-15 13:26:35.355420] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:38.676 [2024-07-15 13:26:35.355558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.676 [2024-07-15 13:26:35.355581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6560 with addr=10.0.0.2, port=4420 00:27:38.676 [2024-07-15 13:26:35.355592] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6560 is same with the state(5) to be set 00:27:38.676 [2024-07-15 13:26:35.355610] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6560 (9): Bad file descriptor 00:27:38.676 [2024-07-15 13:26:35.355643] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:38.676 [2024-07-15 13:26:35.355654] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:38.676 [2024-07-15 13:26:35.355664] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:38.676 [2024-07-15 13:26:35.355696] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:38.676 [2024-07-15 13:26:35.355712] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:27:38.676 [2024-07-15 13:26:35.355777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.676 [2024-07-15 13:26:35.355795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2182640 with addr=10.0.0.3, port=4420 00:27:38.676 [2024-07-15 13:26:35.355805] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2182640 is same with the state(5) to be set 00:27:38.676 [2024-07-15 13:26:35.355820] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2182640 (9): Bad file descriptor 00:27:38.676 [2024-07-15 13:26:35.355834] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:27:38.676 [2024-07-15 13:26:35.355843] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:27:38.676 [2024-07-15 13:26:35.355852] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:27:38.676 [2024-07-15 13:26:35.355865] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:38.676 [2024-07-15 13:26:35.365507] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:38.676 [2024-07-15 13:26:35.365657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.676 [2024-07-15 13:26:35.365680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6560 with addr=10.0.0.2, port=4420 00:27:38.676 [2024-07-15 13:26:35.365693] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6560 is same with the state(5) to be set 00:27:38.676 [2024-07-15 13:26:35.365711] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6560 (9): Bad file descriptor 00:27:38.676 [2024-07-15 13:26:35.365745] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:38.676 [2024-07-15 13:26:35.365756] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:38.676 [2024-07-15 13:26:35.365766] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:38.676 [2024-07-15 13:26:35.365789] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:38.676 [2024-07-15 13:26:35.365804] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:27:38.676 [2024-07-15 13:26:35.365865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.676 [2024-07-15 13:26:35.365883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2182640 with addr=10.0.0.3, port=4420 00:27:38.676 [2024-07-15 13:26:35.365894] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2182640 is same with the state(5) to be set 00:27:38.676 [2024-07-15 13:26:35.365909] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2182640 (9): Bad file descriptor 00:27:38.676 [2024-07-15 13:26:35.365923] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:27:38.676 [2024-07-15 13:26:35.365932] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:27:38.676 [2024-07-15 13:26:35.365940] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:27:38.676 [2024-07-15 13:26:35.365954] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:38.676 [2024-07-15 13:26:35.375605] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:38.676 [2024-07-15 13:26:35.375763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.676 [2024-07-15 13:26:35.375786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6560 with addr=10.0.0.2, port=4420 00:27:38.676 [2024-07-15 13:26:35.375798] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6560 is same with the state(5) to be set 00:27:38.676 [2024-07-15 13:26:35.375830] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6560 (9): Bad file descriptor 00:27:38.676 [2024-07-15 13:26:35.375880] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:38.676 [2024-07-15 13:26:35.375893] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:38.676 [2024-07-15 13:26:35.375905] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:38.676 [2024-07-15 13:26:35.375919] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:27:38.676 [2024-07-15 13:26:35.375931] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:38.676 [2024-07-15 13:26:35.375997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.676 [2024-07-15 13:26:35.376015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2182640 with addr=10.0.0.3, port=4420 00:27:38.676 [2024-07-15 13:26:35.376025] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2182640 is same with the state(5) to be set 00:27:38.676 [2024-07-15 13:26:35.376041] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2182640 (9): Bad file descriptor 00:27:38.676 [2024-07-15 13:26:35.376055] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:27:38.676 [2024-07-15 13:26:35.376064] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:27:38.676 [2024-07-15 13:26:35.376073] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:27:38.676 [2024-07-15 13:26:35.376087] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:38.676 [2024-07-15 13:26:35.384882] bdev_nvme.c:6771:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 not found 00:27:38.676 [2024-07-15 13:26:35.384929] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:27:38.676 [2024-07-15 13:26:35.384970] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:27:38.676 [2024-07-15 13:26:35.385008] bdev_nvme.c:6771:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:27:38.676 [2024-07-15 13:26:35.385024] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:38.676 [2024-07-15 13:26:35.385038] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:38.934 [2024-07-15 13:26:35.470979] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:27:38.934 [2024-07-15 13:26:35.471075] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:39.867 13:26:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # get_subsystem_names 00:27:39.867 13:26:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:39.867 13:26:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:27:39.867 13:26:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:27:39.867 13:26:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.867 13:26:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:39.867 13:26:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:27:39.867 13:26:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.867 13:26:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:27:39.867 13:26:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # get_bdev_list 00:27:39.867 13:26:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:39.867 13:26:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.867 13:26:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:27:39.867 13:26:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:39.867 13:26:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:27:39.867 13:26:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:27:39.867 13:26:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.867 13:26:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:27:39.867 13:26:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 
00:27:39.867 13:26:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:27:39.867 13:26:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.867 13:26:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:39.867 13:26:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:39.867 13:26:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:27:39.867 13:26:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:27:39.867 13:26:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.867 13:26:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # [[ 4421 == \4\4\2\1 ]] 00:27:39.867 13:26:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0 00:27:39.867 13:26:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:39.867 13:26:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:27:39.867 13:26:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.867 13:26:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:27:39.867 13:26:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:39.867 13:26:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:27:39.868 13:26:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.868 13:26:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # [[ 4421 == \4\4\2\1 ]] 00:27:39.868 13:26:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@168 -- # get_notification_count 00:27:39.868 13:26:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:27:39.868 13:26:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.868 13:26:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:39.868 13:26:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:27:39.868 13:26:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.868 13:26:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=0 00:27:39.868 13:26:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 00:27:39.868 13:26:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@169 -- # [[ 0 == 0 ]] 00:27:39.868 13:26:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@171 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:27:39.868 13:26:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.868 13:26:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:39.868 13:26:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.868 13:26:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@172 -- # sleep 1 00:27:40.126 [2024-07-15 13:26:36.657796] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:27:41.082 13:26:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@174 -- # get_mdns_discovery_svcs 00:27:41.082 13:26:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:27:41.082 13:26:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.082 13:26:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:41.082 13:26:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:27:41.082 13:26:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:27:41.082 13:26:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:27:41.082 13:26:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.082 13:26:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@174 -- # [[ '' == '' ]] 00:27:41.082 13:26:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@175 -- # get_subsystem_names 00:27:41.082 13:26:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:41.082 13:26:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:27:41.082 13:26:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:27:41.082 13:26:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.082 13:26:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:27:41.082 13:26:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:41.082 13:26:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.082 13:26:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@175 -- # [[ '' == '' ]] 00:27:41.082 13:26:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # get_bdev_list 00:27:41.082 13:26:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:41.082 13:26:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.082 13:26:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:41.082 13:26:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:27:41.082 13:26:37 nvmf_tcp.nvmf_mdns_discovery 
-- host/mdns_discovery.sh@65 -- # sort 00:27:41.082 13:26:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:27:41.082 13:26:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.082 13:26:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # [[ '' == '' ]] 00:27:41.082 13:26:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@177 -- # get_notification_count 00:27:41.082 13:26:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:27:41.082 13:26:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. | length' 00:27:41.082 13:26:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.082 13:26:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:41.082 13:26:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.082 13:26:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=4 00:27:41.082 13:26:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=8 00:27:41.082 13:26:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@178 -- # [[ 4 == 4 ]] 00:27:41.082 13:26:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@181 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:27:41.082 13:26:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.082 13:26:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:41.082 13:26:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.082 13:26:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@182 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:27:41.082 13:26:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@648 -- # local es=0 00:27:41.082 13:26:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:27:41.082 13:26:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:41.082 13:26:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:41.082 13:26:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:41.082 13:26:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:41.082 13:26:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:27:41.082 13:26:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.082 13:26:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:41.340 [2024-07-15 13:26:37.824360] bdev_mdns_client.c: 470:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns 00:27:41.340 2024/07/15 13:26:37 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: 
Code=-17 Msg=File exists 00:27:41.340 request: 00:27:41.340 { 00:27:41.340 "method": "bdev_nvme_start_mdns_discovery", 00:27:41.340 "params": { 00:27:41.340 "name": "mdns", 00:27:41.340 "svcname": "_nvme-disc._http", 00:27:41.340 "hostnqn": "nqn.2021-12.io.spdk:test" 00:27:41.340 } 00:27:41.340 } 00:27:41.340 Got JSON-RPC error response 00:27:41.340 GoRPCClient: error on JSON-RPC call 00:27:41.340 13:26:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:41.340 13:26:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # es=1 00:27:41.340 13:26:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:41.340 13:26:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:41.340 13:26:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:41.340 13:26:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@183 -- # sleep 5 00:27:41.905 [2024-07-15 13:26:38.412935] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:27:41.905 [2024-07-15 13:26:38.512929] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:27:41.905 [2024-07-15 13:26:38.612936] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:27:41.905 [2024-07-15 13:26:38.612992] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:27:41.905 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" "nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:27:41.905 cookie is 0 00:27:41.905 is_local: 1 00:27:41.905 our_own: 0 00:27:41.905 wide_area: 0 00:27:41.905 multicast: 1 00:27:41.905 cached: 1 00:27:42.163 [2024-07-15 13:26:38.712941] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:27:42.163 [2024-07-15 13:26:38.712998] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:27:42.163 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:27:42.163 cookie is 0 00:27:42.163 is_local: 1 00:27:42.163 our_own: 0 00:27:42.163 wide_area: 0 00:27:42.163 multicast: 1 00:27:42.163 cached: 1 00:27:42.163 [2024-07-15 13:26:38.713015] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. 
trid->traddr: 10.0.0.3 trid->trsvcid: 8009 00:27:42.163 [2024-07-15 13:26:38.812943] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:27:42.163 [2024-07-15 13:26:38.813002] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:27:42.163 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" "nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:27:42.163 cookie is 0 00:27:42.163 is_local: 1 00:27:42.163 our_own: 0 00:27:42.163 wide_area: 0 00:27:42.163 multicast: 1 00:27:42.163 cached: 1 00:27:42.420 [2024-07-15 13:26:38.912945] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:27:42.420 [2024-07-15 13:26:38.913002] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:27:42.420 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:27:42.420 cookie is 0 00:27:42.420 is_local: 1 00:27:42.420 our_own: 0 00:27:42.420 wide_area: 0 00:27:42.420 multicast: 1 00:27:42.420 cached: 1 00:27:42.420 [2024-07-15 13:26:38.913019] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.2 trid->trsvcid: 8009 00:27:42.985 [2024-07-15 13:26:39.625547] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:27:42.985 [2024-07-15 13:26:39.625613] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:27:42.985 [2024-07-15 13:26:39.625642] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:27:42.985 [2024-07-15 13:26:39.711731] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new subsystem mdns0_nvme0 00:27:43.243 [2024-07-15 13:26:39.771401] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:27:43.243 [2024-07-15 13:26:39.771458] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:27:43.243 [2024-07-15 13:26:39.825316] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:43.243 [2024-07-15 13:26:39.825370] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:43.243 [2024-07-15 13:26:39.825393] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:43.243 [2024-07-15 13:26:39.911496] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem mdns1_nvme0 00:27:43.243 [2024-07-15 13:26:39.971171] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:27:43.243 [2024-07-15 13:26:39.971257] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:46.561 13:26:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@185 -- # get_mdns_discovery_svcs 00:27:46.561 13:26:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:27:46.561 13:26:42 nvmf_tcp.nvmf_mdns_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.561 13:26:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:46.561 13:26:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:27:46.561 13:26:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:27:46.561 13:26:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:27:46.561 13:26:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.561 13:26:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@185 -- # [[ mdns == \m\d\n\s ]] 00:27:46.561 13:26:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # get_discovery_ctrlrs 00:27:46.561 13:26:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:27:46.561 13:26:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:27:46.561 13:26:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:27:46.561 13:26:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:46.561 13:26:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.561 13:26:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:46.561 13:26:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.561 13:26:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:27:46.561 13:26:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # get_bdev_list 00:27:46.561 13:26:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:46.561 13:26:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.561 13:26:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:46.561 13:26:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:27:46.561 13:26:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:27:46.561 13:26:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:27:46.561 13:26:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.561 13:26:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:27:46.561 13:26:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@190 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:27:46.561 13:26:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@648 -- # local es=0 00:27:46.561 13:26:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:27:46.561 13:26:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:46.561 13:26:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:46.561 13:26:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 
00:27:46.561 13:26:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:46.561 13:26:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:27:46.561 13:26:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.561 13:26:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:46.561 [2024-07-15 13:26:43.018237] bdev_mdns_client.c: 475:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp 00:27:46.561 2024/07/15 13:26:43 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:27:46.561 request: 00:27:46.561 { 00:27:46.561 "method": "bdev_nvme_start_mdns_discovery", 00:27:46.561 "params": { 00:27:46.561 "name": "cdc", 00:27:46.561 "svcname": "_nvme-disc._tcp", 00:27:46.561 "hostnqn": "nqn.2021-12.io.spdk:test" 00:27:46.561 } 00:27:46.561 } 00:27:46.561 Got JSON-RPC error response 00:27:46.561 GoRPCClient: error on JSON-RPC call 00:27:46.561 13:26:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:46.561 13:26:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # es=1 00:27:46.561 13:26:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:46.561 13:26:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:46.561 13:26:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:46.561 13:26:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@191 -- # get_discovery_ctrlrs 00:27:46.561 13:26:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:46.561 13:26:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.561 13:26:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:46.561 13:26:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:27:46.561 13:26:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:27:46.561 13:26:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:27:46.561 13:26:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.561 13:26:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@191 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:27:46.561 13:26:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@192 -- # get_bdev_list 00:27:46.561 13:26:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:46.561 13:26:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.561 13:26:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:46.561 13:26:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:27:46.561 13:26:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:27:46.561 13:26:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:27:46.561 13:26:43 
nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.561 13:26:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@192 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:27:46.561 13:26:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@193 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:27:46.561 13:26:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.561 13:26:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:46.561 13:26:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.561 13:26:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@195 -- # rpc_cmd nvmf_stop_mdns_prr 00:27:46.561 13:26:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.561 13:26:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:46.561 13:26:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.561 13:26:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@197 -- # trap - SIGINT SIGTERM EXIT 00:27:46.561 13:26:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@199 -- # kill 112621 00:27:46.561 13:26:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@202 -- # wait 112621 00:27:46.561 [2024-07-15 13:26:43.254741] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:27:46.825 13:26:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@203 -- # kill 112650 00:27:46.825 Got SIGTERM, quitting. 00:27:46.825 13:26:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@204 -- # nvmftestfini 00:27:46.825 13:26:43 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:46.825 13:26:43 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@117 -- # sync 00:27:46.825 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:27:46.825 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:27:46.825 avahi-daemon 0.8 exiting. 
00:27:46.825 13:26:43 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:46.825 13:26:43 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@120 -- # set +e 00:27:46.825 13:26:43 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:46.825 13:26:43 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:46.825 rmmod nvme_tcp 00:27:46.825 rmmod nvme_fabrics 00:27:46.825 rmmod nvme_keyring 00:27:46.825 13:26:43 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:46.825 13:26:43 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@124 -- # set -e 00:27:46.825 13:26:43 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@125 -- # return 0 00:27:46.825 13:26:43 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@489 -- # '[' -n 112571 ']' 00:27:46.825 13:26:43 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@490 -- # killprocess 112571 00:27:46.825 13:26:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@946 -- # '[' -z 112571 ']' 00:27:46.825 13:26:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@950 -- # kill -0 112571 00:27:46.825 13:26:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@951 -- # uname 00:27:46.825 13:26:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:46.825 13:26:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 112571 00:27:46.825 killing process with pid 112571 00:27:46.825 13:26:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:46.825 13:26:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:46.825 13:26:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 112571' 00:27:46.825 13:26:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@965 -- # kill 112571 00:27:46.825 13:26:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@970 -- # wait 112571 00:27:47.082 13:26:43 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:47.082 13:26:43 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:47.082 13:26:43 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:47.082 13:26:43 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:47.082 13:26:43 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:47.082 13:26:43 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:47.082 13:26:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:47.082 13:26:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:47.082 13:26:43 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:27:47.082 ************************************ 00:27:47.082 END TEST nvmf_mdns_discovery 00:27:47.082 ************************************ 00:27:47.082 00:27:47.082 real 0m20.811s 00:27:47.082 user 0m40.724s 00:27:47.082 sys 0m2.055s 00:27:47.082 13:26:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:47.082 13:26:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:47.082 13:26:43 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 1 -eq 1 ]] 
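For reference, the mDNS discovery flow exercised by the test above can be reproduced by hand with roughly the following RPC sequence. This is a minimal sketch condensed from the rpc_cmd invocations traced in the log; calling scripts/rpc.py directly in place of the test suite's rpc_cmd wrapper, and the /tmp/host.sock socket path, are assumptions taken from this particular run rather than requirements.

    rpc="scripts/rpc.py -s /tmp/host.sock"   # stand-in for the test's rpc_cmd wrapper (assumed client path)

    # Start mDNS-based discovery, as done earlier in this test.
    $rpc bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test

    # Inspect what was discovered.
    $rpc bdev_nvme_get_mdns_discovery_info
    $rpc bdev_nvme_get_controllers
    $rpc bdev_get_bdevs

    # A second start that reuses the name "mdns" (or the already-browsed service) is
    # rejected with -17 / "File exists", matching the JSON-RPC errors logged above.
    $rpc bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test || true

    # Tear down the discovery service.
    $rpc bdev_nvme_stop_mdns_discovery -b mdns
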
00:27:47.082 13:26:43 nvmf_tcp -- nvmf/nvmf.sh@117 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:27:47.082 13:26:43 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:27:47.082 13:26:43 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:47.082 13:26:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:47.082 ************************************ 00:27:47.082 START TEST nvmf_host_multipath 00:27:47.082 ************************************ 00:27:47.082 13:26:43 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:27:47.340 * Looking for test storage... 00:27:47.340 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:27:47.340 13:26:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:47.340 13:26:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:27:47.340 13:26:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:47.340 13:26:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:47.340 13:26:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:47.340 13:26:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:47.340 13:26:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:47.340 13:26:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:47.340 13:26:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:47.340 13:26:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:47.340 13:26:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:47.340 13:26:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:47.340 13:26:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:27:47.340 13:26:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=c8b8b44b-387e-43b9-a950-dc0d98528a02 00:27:47.340 13:26:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:47.340 13:26:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:47.340 13:26:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:47.340 13:26:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:47.340 13:26:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:47.340 13:26:43 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:47.340 13:26:43 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:47.340 13:26:43 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:47.340 13:26:43 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:47.340 13:26:43 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:47.340 13:26:43 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:47.340 13:26:43 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:27:47.340 13:26:43 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:47.340 13:26:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@47 -- # : 0 00:27:47.340 13:26:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:47.340 13:26:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:47.340 13:26:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:47.340 13:26:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:47.340 13:26:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:47.340 13:26:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:47.340 13:26:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:47.340 13:26:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:47.340 13:26:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:47.340 13:26:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:47.340 13:26:43 nvmf_tcp.nvmf_host_multipath 
-- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:47.340 13:26:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:27:47.340 13:26:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:47.340 13:26:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:27:47.340 13:26:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:27:47.340 13:26:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:47.340 13:26:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:47.340 13:26:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:47.340 13:26:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:47.340 13:26:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:47.340 13:26:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:47.340 13:26:43 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:47.340 13:26:43 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:47.340 13:26:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:27:47.340 13:26:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:27:47.340 13:26:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:27:47.340 13:26:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:27:47.340 13:26:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:27:47.340 13:26:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:27:47.340 13:26:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:47.340 13:26:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:47.340 13:26:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:27:47.340 13:26:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:27:47.340 13:26:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:47.340 13:26:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:47.340 13:26:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:47.340 13:26:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:47.340 13:26:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:47.340 13:26:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:47.340 13:26:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:47.340 13:26:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:47.340 13:26:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:27:47.340 13:26:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:27:47.340 Cannot 
find device "nvmf_tgt_br" 00:27:47.340 13:26:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # true 00:27:47.340 13:26:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:27:47.340 Cannot find device "nvmf_tgt_br2" 00:27:47.340 13:26:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # true 00:27:47.340 13:26:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:27:47.340 13:26:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:27:47.340 Cannot find device "nvmf_tgt_br" 00:27:47.340 13:26:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # true 00:27:47.340 13:26:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:27:47.340 Cannot find device "nvmf_tgt_br2" 00:27:47.340 13:26:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # true 00:27:47.340 13:26:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:27:47.340 13:26:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:27:47.340 13:26:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:47.340 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:47.340 13:26:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:27:47.340 13:26:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:47.340 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:47.340 13:26:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:27:47.340 13:26:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:27:47.340 13:26:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:47.340 13:26:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:47.340 13:26:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:47.340 13:26:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:47.599 13:26:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:47.599 13:26:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:47.599 13:26:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:27:47.599 13:26:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:27:47.599 13:26:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:27:47.599 13:26:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:27:47.599 13:26:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:27:47.599 13:26:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:27:47.599 13:26:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:47.599 13:26:44 
nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:47.599 13:26:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:47.599 13:26:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:27:47.599 13:26:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:27:47.599 13:26:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:27:47.599 13:26:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:47.599 13:26:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:47.599 13:26:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:47.599 13:26:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:47.599 13:26:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:27:47.599 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:47.599 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:27:47.599 00:27:47.599 --- 10.0.0.2 ping statistics --- 00:27:47.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:47.599 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:27:47.599 13:26:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:27:47.599 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:47.599 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:27:47.599 00:27:47.599 --- 10.0.0.3 ping statistics --- 00:27:47.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:47.599 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:27:47.599 13:26:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:47.599 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:47.599 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:27:47.599 00:27:47.599 --- 10.0.0.1 ping statistics --- 00:27:47.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:47.599 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:27:47.599 13:26:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:47.599 13:26:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@433 -- # return 0 00:27:47.599 13:26:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:47.599 13:26:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:47.599 13:26:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:47.599 13:26:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:47.599 13:26:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:47.599 13:26:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:47.599 13:26:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:47.599 13:26:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:27:47.599 13:26:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:47.599 13:26:44 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:47.599 13:26:44 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:27:47.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:47.599 13:26:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@481 -- # nvmfpid=113203 00:27:47.599 13:26:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:27:47.599 13:26:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@482 -- # waitforlisten 113203 00:27:47.599 13:26:44 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@827 -- # '[' -z 113203 ']' 00:27:47.599 13:26:44 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:47.599 13:26:44 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:47.600 13:26:44 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:47.600 13:26:44 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:47.600 13:26:44 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:27:47.600 [2024-07-15 13:26:44.303447] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:27:47.600 [2024-07-15 13:26:44.303795] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:47.858 [2024-07-15 13:26:44.443342] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:47.858 [2024-07-15 13:26:44.581980] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:47.858 [2024-07-15 13:26:44.582350] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:47.858 [2024-07-15 13:26:44.582486] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:47.858 [2024-07-15 13:26:44.582607] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:47.858 [2024-07-15 13:26:44.582649] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:47.858 [2024-07-15 13:26:44.582943] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:47.858 [2024-07-15 13:26:44.582957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:48.792 13:26:45 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:48.792 13:26:45 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@860 -- # return 0 00:27:48.792 13:26:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:48.792 13:26:45 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:48.792 13:26:45 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:27:48.792 13:26:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:48.792 13:26:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=113203 00:27:48.792 13:26:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:49.050 [2024-07-15 13:26:45.585473] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:49.050 13:26:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:27:49.308 Malloc0 00:27:49.308 13:26:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:27:49.565 13:26:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:49.824 13:26:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:50.082 [2024-07-15 13:26:46.806610] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:50.340 13:26:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:50.599 [2024-07-15 13:26:47.134940] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:50.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
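Before the multipath cases start, it is worth condensing the test bed that the trace above has just assembled into plain commands. The sketch below is reconstructed from the nvmf_veth_init and multipath.sh steps of this run, not an official recipe: the addresses, the NQN, the 64/512 Malloc geometry and both listeners sitting on 10.0.0.2 are simply the values used here, rpc.py talks to the default /var/tmp/spdk.sock of the nvmf_tgt that was just started inside the namespace, and running any of it by hand needs root.

# 1) Virtual topology built by nvmf_veth_init: a namespace for the target,
#    veth pairs for the initiator and the two target interfaces, all on one bridge
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target address used by this test
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # second target address
ip link set nvmf_init_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
for dev in nvmf_br nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                                 # sanity-check both target addresses

# 2) Target-side configuration with ANA reporting, as issued through rpc.py above
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0                                # MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2   # -r: report ANA states
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # path A
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421   # path B

The host side then connects to the same subsystem twice from bdevperf, once per port and with -x multipath on the second attach, which is what the bdev_nvme_attach_controller calls in the next few lines do.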
00:27:50.599 13:26:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=113301 00:27:50.599 13:26:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:27:50.599 13:26:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:50.599 13:26:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 113301 /var/tmp/bdevperf.sock 00:27:50.599 13:26:47 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@827 -- # '[' -z 113301 ']' 00:27:50.599 13:26:47 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:50.599 13:26:47 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:50.599 13:26:47 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:50.599 13:26:47 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:50.599 13:26:47 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:27:50.857 13:26:47 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:50.857 13:26:47 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@860 -- # return 0 00:27:50.857 13:26:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:27:51.425 13:26:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:27:51.683 Nvme0n1 00:27:51.683 13:26:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:27:51.996 Nvme0n1 00:27:52.254 13:26:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:27:52.254 13:26:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:27:53.187 13:26:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:27:53.187 13:26:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:53.445 13:26:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:53.703 13:26:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:27:53.703 13:26:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=113380 00:27:53.703 13:26:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 113203 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:27:53.703 13:26:50 nvmf_tcp.nvmf_host_multipath -- 
host/multipath.sh@66 -- # sleep 6 00:28:00.260 13:26:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:28:00.260 13:26:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:28:00.260 13:26:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:28:00.260 13:26:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:00.260 Attaching 4 probes... 00:28:00.260 @path[10.0.0.2, 4421]: 15257 00:28:00.260 @path[10.0.0.2, 4421]: 15438 00:28:00.260 @path[10.0.0.2, 4421]: 15209 00:28:00.260 @path[10.0.0.2, 4421]: 15348 00:28:00.260 @path[10.0.0.2, 4421]: 15133 00:28:00.260 13:26:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:28:00.260 13:26:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:28:00.260 13:26:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:28:00.260 13:26:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:28:00.260 13:26:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:28:00.260 13:26:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:28:00.260 13:26:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 113380 00:28:00.260 13:26:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:00.260 13:26:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:28:00.260 13:26:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:00.260 13:26:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:28:00.518 13:26:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:28:00.518 13:26:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=113512 00:28:00.518 13:26:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 113203 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:28:00.518 13:26:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:28:07.113 13:27:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:28:07.113 13:27:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:28:07.113 13:27:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:28:07.113 13:27:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:07.113 Attaching 4 probes... 
00:28:07.113 @path[10.0.0.2, 4420]: 16172 00:28:07.113 @path[10.0.0.2, 4420]: 16361 00:28:07.113 @path[10.0.0.2, 4420]: 16285 00:28:07.113 @path[10.0.0.2, 4420]: 16122 00:28:07.113 @path[10.0.0.2, 4420]: 16223 00:28:07.113 13:27:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:28:07.113 13:27:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:28:07.113 13:27:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:28:07.113 13:27:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:28:07.113 13:27:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:28:07.113 13:27:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:28:07.113 13:27:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 113512 00:28:07.113 13:27:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:07.113 13:27:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:28:07.113 13:27:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:28:07.113 13:27:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:07.371 13:27:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:28:07.371 13:27:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=113643 00:28:07.371 13:27:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:28:07.371 13:27:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 113203 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:28:13.927 13:27:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:28:13.927 13:27:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:28:13.927 13:27:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:28:13.927 13:27:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:13.927 Attaching 4 probes... 
00:28:13.927 @path[10.0.0.2, 4421]: 12896 00:28:13.927 @path[10.0.0.2, 4421]: 15299 00:28:13.927 @path[10.0.0.2, 4421]: 15165 00:28:13.927 @path[10.0.0.2, 4421]: 15589 00:28:13.927 @path[10.0.0.2, 4421]: 15295 00:28:13.927 13:27:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:28:13.927 13:27:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:28:13.927 13:27:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:28:13.927 13:27:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:28:13.927 13:27:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:28:13.927 13:27:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:28:13.927 13:27:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 113643 00:28:13.927 13:27:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:13.927 13:27:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:28:13.927 13:27:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:28:13.927 13:27:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:28:14.185 13:27:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:28:14.185 13:27:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=113774 00:28:14.185 13:27:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 113203 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:28:14.185 13:27:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:28:20.740 13:27:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:28:20.740 13:27:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:28:20.740 13:27:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:28:20.740 13:27:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:20.740 Attaching 4 probes... 
00:28:20.740 00:28:20.740 00:28:20.740 00:28:20.740 00:28:20.740 00:28:20.740 13:27:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:28:20.740 13:27:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:28:20.740 13:27:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:28:20.740 13:27:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:28:20.740 13:27:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:28:20.740 13:27:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:28:20.740 13:27:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 113774 00:28:20.740 13:27:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:20.740 13:27:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:28:20.740 13:27:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:20.740 13:27:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:20.997 13:27:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:28:20.997 13:27:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 113203 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:28:20.997 13:27:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=113900 00:28:20.997 13:27:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:28:27.577 13:27:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:28:27.577 13:27:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:28:27.577 13:27:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:28:27.577 13:27:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:27.577 Attaching 4 probes... 
00:28:27.577 @path[10.0.0.2, 4421]: 14881 00:28:27.577 @path[10.0.0.2, 4421]: 15182 00:28:27.577 @path[10.0.0.2, 4421]: 13985 00:28:27.577 @path[10.0.0.2, 4421]: 15343 00:28:27.577 @path[10.0.0.2, 4421]: 15234 00:28:27.577 13:27:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:28:27.577 13:27:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:28:27.577 13:27:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:28:27.577 13:27:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:28:27.577 13:27:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:28:27.577 13:27:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:28:27.577 13:27:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 113900 00:28:27.577 13:27:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:27.577 13:27:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:27.577 [2024-07-15 13:27:24.153665] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc4360 is same with the state(5) to be set 00:28:27.577 [2024-07-15 13:27:24.153767] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc4360 is same with the state(5) to be set 00:28:27.577 [2024-07-15 13:27:24.153782] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc4360 is same with the state(5) to be set 00:28:27.577 [2024-07-15 13:27:24.153792] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc4360 is same with the state(5) to be set 00:28:27.577 [2024-07-15 13:27:24.153800] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc4360 is same with the state(5) to be set 00:28:27.577 [2024-07-15 13:27:24.153812] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc4360 is same with the state(5) to be set 00:28:27.577 [2024-07-15 13:27:24.153821] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc4360 is same with the state(5) to be set 00:28:27.577 [2024-07-15 13:27:24.153830] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc4360 is same with the state(5) to be set 00:28:27.577 [2024-07-15 13:27:24.153840] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc4360 is same with the state(5) to be set 00:28:27.577 [2024-07-15 13:27:24.153849] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc4360 is same with the state(5) to be set 00:28:27.577 [2024-07-15 13:27:24.153859] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc4360 is same with the state(5) to be set 00:28:27.577 [2024-07-15 13:27:24.153868] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc4360 is same with the state(5) to be set 00:28:27.577 [2024-07-15 13:27:24.153877] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc4360 is same with the state(5) to be set 00:28:27.577 [2024-07-15 13:27:24.153886] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc4360 is same with the state(5) to be set 00:28:27.577 
[2024-07-15 13:27:24.153894] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc4360 is same with the state(5) to be set 00:28:27.577
(last message repeated roughly 40 more times for tqpair=0xdc4360 between 13:27:24.153905 and 13:27:24.154285 while the 4421 listener was being removed)
00:28:27.578 [2024-07-15 13:27:24.154293] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xdc4360 is same with the state(5) to be set 00:28:27.578 [2024-07-15 13:27:24.154301] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc4360 is same with the state(5) to be set 00:28:27.578 [2024-07-15 13:27:24.154310] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc4360 is same with the state(5) to be set 00:28:27.578 [2024-07-15 13:27:24.154319] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc4360 is same with the state(5) to be set 00:28:27.578 [2024-07-15 13:27:24.154327] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc4360 is same with the state(5) to be set 00:28:27.578 [2024-07-15 13:27:24.154335] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc4360 is same with the state(5) to be set 00:28:27.578 [2024-07-15 13:27:24.154345] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc4360 is same with the state(5) to be set 00:28:27.578 [2024-07-15 13:27:24.154356] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc4360 is same with the state(5) to be set 00:28:27.578 [2024-07-15 13:27:24.154366] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc4360 is same with the state(5) to be set 00:28:27.578 [2024-07-15 13:27:24.154374] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc4360 is same with the state(5) to be set 00:28:27.578 [2024-07-15 13:27:24.154383] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc4360 is same with the state(5) to be set 00:28:27.578 [2024-07-15 13:27:24.154393] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc4360 is same with the state(5) to be set 00:28:27.578 [2024-07-15 13:27:24.154401] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc4360 is same with the state(5) to be set 00:28:27.578 [2024-07-15 13:27:24.154410] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc4360 is same with the state(5) to be set 00:28:27.578 [2024-07-15 13:27:24.154418] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc4360 is same with the state(5) to be set 00:28:27.578 [2024-07-15 13:27:24.154426] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc4360 is same with the state(5) to be set 00:28:27.578 13:27:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:28:28.511 13:27:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:28:28.511 13:27:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=114035 00:28:28.511 13:27:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 113203 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:28:28.511 13:27:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:28:35.088 13:27:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:28:35.088 13:27:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:28:35.088 13:27:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:28:35.088 13:27:31 
nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:35.088 Attaching 4 probes... 00:28:35.088 @path[10.0.0.2, 4420]: 15760 00:28:35.088 @path[10.0.0.2, 4420]: 16097 00:28:35.088 @path[10.0.0.2, 4420]: 16139 00:28:35.088 @path[10.0.0.2, 4420]: 16135 00:28:35.088 @path[10.0.0.2, 4420]: 15972 00:28:35.088 13:27:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:28:35.088 13:27:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:28:35.088 13:27:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:28:35.088 13:27:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:28:35.088 13:27:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:28:35.088 13:27:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:28:35.088 13:27:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 114035 00:28:35.088 13:27:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:35.088 13:27:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:35.088 [2024-07-15 13:27:31.756769] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:35.088 13:27:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:35.346 13:27:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:28:41.903 13:27:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:28:41.903 13:27:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=114218 00:28:41.903 13:27:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 113203 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:28:41.903 13:27:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:28:48.466 13:27:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:28:48.466 13:27:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:28:48.466 13:27:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:28:48.466 13:27:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:48.466 Attaching 4 probes... 
00:28:48.466 @path[10.0.0.2, 4421]: 14944 00:28:48.466 @path[10.0.0.2, 4421]: 15018 00:28:48.466 @path[10.0.0.2, 4421]: 15357 00:28:48.466 @path[10.0.0.2, 4421]: 15165 00:28:48.466 @path[10.0.0.2, 4421]: 15051 00:28:48.466 13:27:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:28:48.466 13:27:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:28:48.466 13:27:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:28:48.466 13:27:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:28:48.466 13:27:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:28:48.466 13:27:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:28:48.466 13:27:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 114218 00:28:48.466 13:27:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:48.466 13:27:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 113301 00:28:48.466 13:27:44 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@946 -- # '[' -z 113301 ']' 00:28:48.466 13:27:44 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@950 -- # kill -0 113301 00:28:48.466 13:27:44 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@951 -- # uname 00:28:48.466 13:27:44 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:48.466 13:27:44 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 113301 00:28:48.466 killing process with pid 113301 00:28:48.466 13:27:44 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:28:48.466 13:27:44 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:28:48.466 13:27:44 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@964 -- # echo 'killing process with pid 113301' 00:28:48.466 13:27:44 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@965 -- # kill 113301 00:28:48.466 13:27:44 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@970 -- # wait 113301 00:28:48.466 Connection closed with partial response: 00:28:48.466 00:28:48.466 00:28:48.466 13:27:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 113301 00:28:48.466 13:27:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:28:48.466 [2024-07-15 13:26:47.207665] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:28:48.466 [2024-07-15 13:26:47.207789] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113301 ] 00:28:48.466 [2024-07-15 13:26:47.343907] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:48.466 [2024-07-15 13:26:47.455585] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:48.466 Running I/O for 90 seconds... 
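Before the bdevperf trace from try.txt continues below, it may help to spell out the check that produced each "Attaching 4 probes... / @path[10.0.0.2, port]" block above. One iteration of the set_ANA_state + confirm_io_on_port pattern looks roughly like the sketch below; the rpc, jq and awk/cut/sed commands are taken from this log, while the backgrounding of bpftrace.sh and the redirection into trace.txt are assumptions about how multipath.sh wires them together.

# One ANA-failover check, reconstructed from the commands in this log
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1
TRACE=/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt

# Make port 4421 the optimized path and demote 4420
$RPC nvmf_subsystem_listener_set_ana_state $NQN -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
$RPC nvmf_subsystem_listener_set_ana_state $NQN -t tcp -a 10.0.0.2 -s 4421 -n optimized

# Trace which path the target actually services I/O on (113203 is the nvmf_tgt pid in this run);
# assumption: the harness captures the probe output in the trace.txt it cats above
/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 113203 \
    /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt > "$TRACE" &
dtrace_pid=$!
sleep 6                                          # let bdevperf keep issuing I/O while the probes count it

# The listener that reports the expected ANA state...
active_port=$($RPC nvmf_subsystem_get_listeners $NQN \
    | jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid')
# ...must match the first @path[10.0.0.2, <port>] line the probes recorded
port=$(awk '$1=="@path[10.0.0.2," {print $2}' "$TRACE" | cut -d ']' -f1 | sed -n 1p)
[[ $port == "$active_port" ]] || echo "I/O is not flowing on the expected path" >&2

kill "$dtrace_pid"
rm -f "$TRACE"

The cases with both listeners inaccessible work the same way, except that the jq filter selects ana_state=="" and the probe output is empty, so both sides of the comparison are blank, as seen in the confirm_io_on_port '' '' block earlier.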
00:28:48.466 [2024-07-15 13:26:56.993577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:117144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.466 [2024-07-15 13:26:56.993658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:48.466 [2024-07-15 13:26:56.993720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:117152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.466 [2024-07-15 13:26:56.993747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:48.466 [2024-07-15 13:26:56.993772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:117160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.466 [2024-07-15 13:26:56.993788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:48.466 [2024-07-15 13:26:56.993809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:117168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.466 [2024-07-15 13:26:56.993824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:48.466 [2024-07-15 13:26:56.993846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:117176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.466 [2024-07-15 13:26:56.993860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:48.466 [2024-07-15 13:26:56.993882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:117184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.466 [2024-07-15 13:26:56.993897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:48.466 [2024-07-15 13:26:56.993918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:117192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.466 [2024-07-15 13:26:56.993932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:48.466 [2024-07-15 13:26:56.993954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:117200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.466 [2024-07-15 13:26:56.993969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:48.466 [2024-07-15 13:26:56.993990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:117208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.466 [2024-07-15 13:26:56.994004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:48.466 [2024-07-15 13:26:56.994351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:117216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.466 [2024-07-15 13:26:56.994380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:48.466 [2024-07-15 13:26:56.994407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:117224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.466 [2024-07-15 13:26:56.994463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:48.466 [2024-07-15 13:26:56.994489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:117232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.466 [2024-07-15 13:26:56.994505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:48.466 [2024-07-15 13:26:56.994527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:117240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.466 [2024-07-15 13:26:56.994542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:48.466 [2024-07-15 13:26:56.994563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:117248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.466 [2024-07-15 13:26:56.994578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:48.466 [2024-07-15 13:26:56.994599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:117256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.466 [2024-07-15 13:26:56.994615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:48.466 [2024-07-15 13:26:56.994635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:117264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.466 [2024-07-15 13:26:56.994651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:48.466 [2024-07-15 13:26:56.994671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:117272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.466 [2024-07-15 13:26:56.994686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:48.466 [2024-07-15 13:26:56.994708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:117280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.466 [2024-07-15 13:26:56.994723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:48.466 [2024-07-15 13:26:56.994744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:117288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.466 [2024-07-15 13:26:56.994774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:48.466 [2024-07-15 13:26:56.994797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:117296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.466 [2024-07-15 13:26:56.994813] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:48.466 [2024-07-15 13:26:56.994834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:117304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.466 [2024-07-15 13:26:56.994849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:48.466 [2024-07-15 13:26:56.994870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:117312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.466 [2024-07-15 13:26:56.994887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:48.466 [2024-07-15 13:26:56.994909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:117320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.466 [2024-07-15 13:26:56.994924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:48.467 [2024-07-15 13:26:56.994955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:117328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.467 [2024-07-15 13:26:56.994970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:48.467 [2024-07-15 13:26:56.994991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:117336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.467 [2024-07-15 13:26:56.995006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:48.467 [2024-07-15 13:26:56.995028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:117344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.467 [2024-07-15 13:26:56.995043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:48.467 [2024-07-15 13:26:56.995063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:117352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.467 [2024-07-15 13:26:56.995078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:48.467 [2024-07-15 13:26:56.995099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:117360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.467 [2024-07-15 13:26:56.995114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:48.467 [2024-07-15 13:26:56.995135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:117368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.467 [2024-07-15 13:26:56.995149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:48.467 [2024-07-15 13:26:56.995170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:117376 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:28:48.467 [2024-07-15 13:26:56.995184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:48.467 [2024-07-15 13:26:56.995218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:117384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.467 [2024-07-15 13:26:56.995236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:48.467 [2024-07-15 13:26:56.995258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:117392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.467 [2024-07-15 13:26:56.995273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:48.467 [2024-07-15 13:26:56.995294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:117400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.467 [2024-07-15 13:26:56.995309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:48.467 [2024-07-15 13:26:56.995330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:117408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.467 [2024-07-15 13:26:56.995346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:48.467 [2024-07-15 13:26:56.995367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:117416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.467 [2024-07-15 13:26:56.995382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:48.467 [2024-07-15 13:26:56.995411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:117424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.467 [2024-07-15 13:26:56.995427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:48.467 [2024-07-15 13:26:56.995448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:117432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.467 [2024-07-15 13:26:56.995463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:48.467 [2024-07-15 13:26:56.995484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:117440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.467 [2024-07-15 13:26:56.995498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:48.467 [2024-07-15 13:26:56.995520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:117448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.467 [2024-07-15 13:26:56.995535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:48.467 [2024-07-15 13:26:56.995556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:54 nsid:1 lba:116944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.467 [2024-07-15 13:26:56.995570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:48.467 [2024-07-15 13:26:56.995592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:116952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.467 [2024-07-15 13:26:56.995607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:48.467 [2024-07-15 13:26:56.995628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:116960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.467 [2024-07-15 13:26:56.995645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:48.467 [2024-07-15 13:26:56.995667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:116968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.467 [2024-07-15 13:26:56.995682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:48.467 [2024-07-15 13:26:56.999175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:116976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.467 [2024-07-15 13:26:56.999219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:48.467 [2024-07-15 13:26:56.999250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:116984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.467 [2024-07-15 13:26:56.999267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:48.467 [2024-07-15 13:26:56.999289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:116992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.467 [2024-07-15 13:26:56.999305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:48.467 [2024-07-15 13:26:56.999326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:117000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.467 [2024-07-15 13:26:56.999342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:48.467 [2024-07-15 13:26:56.999363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:117008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.467 [2024-07-15 13:26:56.999390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:48.467 [2024-07-15 13:26:56.999413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:117016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.467 [2024-07-15 13:26:56.999429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.467 [2024-07-15 
13:26:56.999450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:117024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.467 [2024-07-15 13:26:56.999474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:48.467 [2024-07-15 13:26:56.999496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:117032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.467 [2024-07-15 13:26:56.999511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:48.467 [2024-07-15 13:26:56.999532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:117040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.467 [2024-07-15 13:26:56.999547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:48.467 [2024-07-15 13:26:56.999569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:117048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.467 [2024-07-15 13:26:56.999583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:48.467 [2024-07-15 13:26:56.999604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:117056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.467 [2024-07-15 13:26:56.999619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:48.467 [2024-07-15 13:26:56.999641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:117064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.468 [2024-07-15 13:26:56.999655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:48.468 [2024-07-15 13:26:56.999676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:117072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.468 [2024-07-15 13:26:56.999692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:48.468 [2024-07-15 13:26:56.999714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:117080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.468 [2024-07-15 13:26:56.999729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:48.468 [2024-07-15 13:26:56.999750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:117088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.468 [2024-07-15 13:26:56.999764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:48.468 [2024-07-15 13:26:56.999785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:117096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.468 [2024-07-15 13:26:56.999800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:27 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:48.468 [2024-07-15 13:26:56.999821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:117104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.468 [2024-07-15 13:26:56.999844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:48.468 [2024-07-15 13:26:56.999867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:117112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.468 [2024-07-15 13:26:56.999882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:48.468 [2024-07-15 13:26:56.999903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:117120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.468 [2024-07-15 13:26:56.999918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:48.468 [2024-07-15 13:26:56.999939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:117128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.468 [2024-07-15 13:26:56.999954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:48.468 [2024-07-15 13:26:56.999975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:117136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.468 [2024-07-15 13:26:56.999990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:48.468 [2024-07-15 13:27:03.651189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:25624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.468 [2024-07-15 13:27:03.651289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:48.468 [2024-07-15 13:27:03.651356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:25632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.468 [2024-07-15 13:27:03.651379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:48.468 [2024-07-15 13:27:03.651403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:25640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.468 [2024-07-15 13:27:03.651427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:48.468 [2024-07-15 13:27:03.651449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:25648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.468 [2024-07-15 13:27:03.651464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:48.468 [2024-07-15 13:27:03.651485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:25656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.468 [2024-07-15 13:27:03.651500] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:48.468 [2024-07-15 13:27:03.651521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:25664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.468 [2024-07-15 13:27:03.651536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:48.468 [2024-07-15 13:27:03.651558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:25672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.468 [2024-07-15 13:27:03.651573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:48.468 [2024-07-15 13:27:03.651594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:25680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.468 [2024-07-15 13:27:03.651639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:48.468 [2024-07-15 13:27:03.651663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:25688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.468 [2024-07-15 13:27:03.651678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:48.468 [2024-07-15 13:27:03.651699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:25696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.468 [2024-07-15 13:27:03.651713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:48.468 [2024-07-15 13:27:03.651734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:25704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.468 [2024-07-15 13:27:03.651748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:48.468 [2024-07-15 13:27:03.651769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:25712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.468 [2024-07-15 13:27:03.651783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:48.468 [2024-07-15 13:27:03.651804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:25720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.468 [2024-07-15 13:27:03.651819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:48.468 [2024-07-15 13:27:03.651840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:25728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.468 [2024-07-15 13:27:03.651854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:48.468 [2024-07-15 13:27:03.651875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:25736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:48.468 [2024-07-15 13:27:03.651889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:48.468 [2024-07-15 13:27:03.651910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:25744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.468 [2024-07-15 13:27:03.651925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:48.468 [2024-07-15 13:27:03.651947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:25752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.468 [2024-07-15 13:27:03.651961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:48.468 [2024-07-15 13:27:03.651997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:25760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.468 [2024-07-15 13:27:03.652012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:48.468 [2024-07-15 13:27:03.652033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:25768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.468 [2024-07-15 13:27:03.652048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:48.468 [2024-07-15 13:27:03.652069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:25776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.468 [2024-07-15 13:27:03.652084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:48.468 [2024-07-15 13:27:03.652114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:25784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.468 [2024-07-15 13:27:03.652130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:48.468 [2024-07-15 13:27:03.652154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:25792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.468 [2024-07-15 13:27:03.652169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:48.468 [2024-07-15 13:27:03.652190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:25800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.468 [2024-07-15 13:27:03.652218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:48.469 [2024-07-15 13:27:03.652243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:25808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.469 [2024-07-15 13:27:03.652258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:48.469 [2024-07-15 13:27:03.652280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:25816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.469 [2024-07-15 13:27:03.652295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:48.469 [2024-07-15 13:27:03.652316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:25824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.469 [2024-07-15 13:27:03.652331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:48.469 [2024-07-15 13:27:03.652352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.469 [2024-07-15 13:27:03.652367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:48.469 [2024-07-15 13:27:03.652389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:25840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.469 [2024-07-15 13:27:03.652403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:48.469 [2024-07-15 13:27:03.652425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:25848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.469 [2024-07-15 13:27:03.652439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:48.469 [2024-07-15 13:27:03.652461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.469 [2024-07-15 13:27:03.652476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:48.469 [2024-07-15 13:27:03.652497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:25864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.469 [2024-07-15 13:27:03.652512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:48.469 [2024-07-15 13:27:03.652533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:25872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.469 [2024-07-15 13:27:03.652548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:48.469 [2024-07-15 13:27:03.652578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:25880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.469 [2024-07-15 13:27:03.652594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:48.469 [2024-07-15 13:27:03.652622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:25888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.469 [2024-07-15 13:27:03.652637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:48.469 [2024-07-15 13:27:03.652659] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:25896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.469 [2024-07-15 13:27:03.652674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:48.469 [2024-07-15 13:27:03.652695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:25904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.469 [2024-07-15 13:27:03.652711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:48.469 [2024-07-15 13:27:03.652732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:25912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.469 [2024-07-15 13:27:03.652747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:48.469 [2024-07-15 13:27:03.652768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:25920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.469 [2024-07-15 13:27:03.652783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:48.469 [2024-07-15 13:27:03.652804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:25928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.469 [2024-07-15 13:27:03.652818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:48.469 [2024-07-15 13:27:03.652840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:25936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.469 [2024-07-15 13:27:03.652855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:48.469 [2024-07-15 13:27:03.652876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:25944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.469 [2024-07-15 13:27:03.652891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:48.469 [2024-07-15 13:27:03.652912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:25952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.469 [2024-07-15 13:27:03.652927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:48.469 [2024-07-15 13:27:03.652948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:25960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.469 [2024-07-15 13:27:03.652963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:48.469 [2024-07-15 13:27:03.652984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:25968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.469 [2024-07-15 13:27:03.652998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005f p:0 m:0 dnr:0 
00:28:48.469 [2024-07-15 13:27:03.653020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:25976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.469 [2024-07-15 13:27:03.653046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:48.469 [2024-07-15 13:27:03.653068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:25984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.469 [2024-07-15 13:27:03.653084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:48.469 [2024-07-15 13:27:03.653105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:25992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.469 [2024-07-15 13:27:03.653120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.469 [2024-07-15 13:27:03.653142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:26000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.469 [2024-07-15 13:27:03.653157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:48.469 [2024-07-15 13:27:03.653178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:26008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.469 [2024-07-15 13:27:03.653193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:48.469 [2024-07-15 13:27:03.653234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:26016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.469 [2024-07-15 13:27:03.653252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:48.469 [2024-07-15 13:27:03.653273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:26024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.469 [2024-07-15 13:27:03.653288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:48.469 [2024-07-15 13:27:03.653320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:26032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.469 [2024-07-15 13:27:03.653334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:48.469 [2024-07-15 13:27:03.653356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:26040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.469 [2024-07-15 13:27:03.653371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:48.469 [2024-07-15 13:27:03.653392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:26048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.469 [2024-07-15 13:27:03.653406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:35 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:48.469 [2024-07-15 13:27:03.653427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:26056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.469 [2024-07-15 13:27:03.653442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:48.469 [2024-07-15 13:27:03.653463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:26064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.469 [2024-07-15 13:27:03.653478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:48.469 [2024-07-15 13:27:03.653499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:26072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.469 [2024-07-15 13:27:03.653521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:48.470 [2024-07-15 13:27:03.653544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:26080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.470 [2024-07-15 13:27:03.653559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:48.470 [2024-07-15 13:27:03.653580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:26088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.470 [2024-07-15 13:27:03.653594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:48.470 [2024-07-15 13:27:03.653615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:26096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.470 [2024-07-15 13:27:03.653639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:48.470 [2024-07-15 13:27:03.653661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:26104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.470 [2024-07-15 13:27:03.653676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:48.470 [2024-07-15 13:27:03.653933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:26112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.470 [2024-07-15 13:27:03.653961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:48.470 [2024-07-15 13:27:03.653995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:26120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.470 [2024-07-15 13:27:03.654013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:48.470 [2024-07-15 13:27:03.654040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:26128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.470 [2024-07-15 13:27:03.654055] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:48.470 [2024-07-15 13:27:03.654081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.470 [2024-07-15 13:27:03.654097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:48.470 [2024-07-15 13:27:03.654124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.470 [2024-07-15 13:27:03.654140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:48.470 [2024-07-15 13:27:03.654166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:25344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.470 [2024-07-15 13:27:03.654181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:48.470 [2024-07-15 13:27:03.654222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:25352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.470 [2024-07-15 13:27:03.654241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:48.470 [2024-07-15 13:27:03.654275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:25360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.470 [2024-07-15 13:27:03.654291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:48.470 [2024-07-15 13:27:03.654330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:25368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.470 [2024-07-15 13:27:03.654347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:48.470 [2024-07-15 13:27:03.654379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:25376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.470 [2024-07-15 13:27:03.654394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:48.470 [2024-07-15 13:27:03.654420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:25384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.470 [2024-07-15 13:27:03.654435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:48.470 [2024-07-15 13:27:03.654461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:26144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.470 [2024-07-15 13:27:03.654476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:48.470 [2024-07-15 13:27:03.654503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:48.470 [2024-07-15 13:27:03.654518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:48.470 [2024-07-15 13:27:03.654544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:26160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.470 [2024-07-15 13:27:03.654558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:48.470 [2024-07-15 13:27:03.654585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:26168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.470 [2024-07-15 13:27:03.654606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:48.470 [2024-07-15 13:27:03.654632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:26176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.470 [2024-07-15 13:27:03.654647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.470 [2024-07-15 13:27:03.654673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:26184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.470 [2024-07-15 13:27:03.654688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.470 [2024-07-15 13:27:03.654714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:26192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.470 [2024-07-15 13:27:03.654729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:48.470 [2024-07-15 13:27:03.654767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:26200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.470 [2024-07-15 13:27:03.654785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:48.470 [2024-07-15 13:27:03.654812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:26208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.470 [2024-07-15 13:27:03.654828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:48.470 [2024-07-15 13:27:03.654862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:26216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.470 [2024-07-15 13:27:03.654879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:48.470 [2024-07-15 13:27:03.654911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:26224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.470 [2024-07-15 13:27:03.654927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:48.470 [2024-07-15 13:27:03.654953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 
lba:26232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.470 [2024-07-15 13:27:03.654968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:48.470 [2024-07-15 13:27:03.654994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:26240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.470 [2024-07-15 13:27:03.655010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:48.470 [2024-07-15 13:27:03.655036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:26248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.470 [2024-07-15 13:27:03.655052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:48.470 [2024-07-15 13:27:03.655078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:26256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.470 [2024-07-15 13:27:03.655093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:48.470 [2024-07-15 13:27:03.655119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:26264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.470 [2024-07-15 13:27:03.655135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:48.470 [2024-07-15 13:27:03.655161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:26272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.471 [2024-07-15 13:27:03.655176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:48.471 [2024-07-15 13:27:03.655203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:26280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.471 [2024-07-15 13:27:03.655231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:48.471 [2024-07-15 13:27:03.655258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:26288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.471 [2024-07-15 13:27:03.655274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:48.471 [2024-07-15 13:27:03.655300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:26296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.471 [2024-07-15 13:27:03.655321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:48.471 [2024-07-15 13:27:03.655348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:26304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.471 [2024-07-15 13:27:03.655363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:48.471 [2024-07-15 13:27:03.655390] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:26312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.471 [2024-07-15 13:27:03.655412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:48.471 [2024-07-15 13:27:03.655439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:26320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.471 [2024-07-15 13:27:03.655455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:48.471 [2024-07-15 13:27:03.655481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:26328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.471 [2024-07-15 13:27:03.655496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:28:48.471 [2024-07-15 13:27:03.655523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:25392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.471 [2024-07-15 13:27:03.655538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:48.471 [2024-07-15 13:27:03.655564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:25400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.471 [2024-07-15 13:27:03.655579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:28:48.471 [2024-07-15 13:27:03.655606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:25408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.471 [2024-07-15 13:27:03.655621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:48.471 [2024-07-15 13:27:03.655647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:25416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.471 [2024-07-15 13:27:03.655662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:48.471 [2024-07-15 13:27:03.655688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:25424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.471 [2024-07-15 13:27:03.655703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:48.471 [2024-07-15 13:27:03.655730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:25432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.471 [2024-07-15 13:27:03.655745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:48.471 [2024-07-15 13:27:03.655772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.471 [2024-07-15 13:27:03.655787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001a p:0 m:0 dnr:0 
00:28:48.471 [2024-07-15 13:27:03.655813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.471 [2024-07-15 13:27:03.655828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:48.471 [2024-07-15 13:27:03.655854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:25456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.471 [2024-07-15 13:27:03.655869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:48.471 [2024-07-15 13:27:03.655895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:25464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.471 [2024-07-15 13:27:03.655916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:48.471 [2024-07-15 13:27:03.655944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:25472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.471 [2024-07-15 13:27:03.655959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:48.471 [2024-07-15 13:27:03.655986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:25480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.471 [2024-07-15 13:27:03.656001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:48.471 [2024-07-15 13:27:03.656027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:25488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.471 [2024-07-15 13:27:03.656042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:48.471 [2024-07-15 13:27:03.656068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:25496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.471 [2024-07-15 13:27:03.656083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:48.471 [2024-07-15 13:27:03.656109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:25504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.471 [2024-07-15 13:27:03.656124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:48.471 [2024-07-15 13:27:03.656150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:25512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.471 [2024-07-15 13:27:03.656165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:48.471 [2024-07-15 13:27:03.656191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:25520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.472 [2024-07-15 13:27:03.656218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:48.472 [2024-07-15 13:27:03.656248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:25528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.472 [2024-07-15 13:27:03.656263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:48.472 [2024-07-15 13:27:03.656290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:25536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.472 [2024-07-15 13:27:03.656305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:48.472 [2024-07-15 13:27:03.656331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:25544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.472 [2024-07-15 13:27:03.656346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:48.472 [2024-07-15 13:27:03.656372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:25552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.472 [2024-07-15 13:27:03.656387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:48.472 [2024-07-15 13:27:03.656414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:25560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.472 [2024-07-15 13:27:03.656429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:48.472 [2024-07-15 13:27:03.656463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:25568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.472 [2024-07-15 13:27:03.656479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:48.472 [2024-07-15 13:27:03.656505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:25576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.472 [2024-07-15 13:27:03.656520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:48.472 [2024-07-15 13:27:03.656546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:25584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.472 [2024-07-15 13:27:03.656561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:48.472 [2024-07-15 13:27:03.656587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:25592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.472 [2024-07-15 13:27:03.656602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:48.472 [2024-07-15 13:27:03.656628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:25600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.472 [2024-07-15 13:27:03.656643] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:48.472 [2024-07-15 13:27:03.656669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.472 [2024-07-15 13:27:03.656684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:48.472 [2024-07-15 13:27:03.656976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:25616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.472 [2024-07-15 13:27:03.657004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:48.472 [2024-07-15 13:27:03.657039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:26336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.472 [2024-07-15 13:27:03.657057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:48.472 [2024-07-15 13:27:03.657089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:26344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.472 [2024-07-15 13:27:03.657104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:48.472 [2024-07-15 13:27:03.657135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:26352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.472 [2024-07-15 13:27:03.657150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:48.472 [2024-07-15 13:27:10.799828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:10096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.472 [2024-07-15 13:27:10.799921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:48.472 [2024-07-15 13:27:10.799984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:10104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.472 [2024-07-15 13:27:10.800008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:48.472 [2024-07-15 13:27:10.800067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:10112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.472 [2024-07-15 13:27:10.800085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:48.472 [2024-07-15 13:27:10.800107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:10120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.472 [2024-07-15 13:27:10.800122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:48.472 [2024-07-15 13:27:10.800143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:10128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:48.472 [2024-07-15 13:27:10.800158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:48.472 [2024-07-15 13:27:10.800180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:10136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.472 [2024-07-15 13:27:10.800194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:48.472 [2024-07-15 13:27:10.800233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:10144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.472 [2024-07-15 13:27:10.800250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:48.472 [2024-07-15 13:27:10.800272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.472 [2024-07-15 13:27:10.800287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:48.472 [2024-07-15 13:27:10.800384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:10160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.472 [2024-07-15 13:27:10.800409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:48.472 [2024-07-15 13:27:10.800436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:10168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.472 [2024-07-15 13:27:10.800462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:48.472 [2024-07-15 13:27:10.800486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:10176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.472 [2024-07-15 13:27:10.800500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:48.472 [2024-07-15 13:27:10.800524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:10184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.472 [2024-07-15 13:27:10.800539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:48.472 [2024-07-15 13:27:10.800560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:10192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.472 [2024-07-15 13:27:10.800575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:48.472 [2024-07-15 13:27:10.800597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:10200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.472 [2024-07-15 13:27:10.800612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:48.472 [2024-07-15 13:27:10.800633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 
lba:10208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.472 [2024-07-15 13:27:10.800661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:48.472 [2024-07-15 13:27:10.800686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:10216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.472 [2024-07-15 13:27:10.800702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:48.472 [2024-07-15 13:27:10.800760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:10224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.472 [2024-07-15 13:27:10.800781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:48.472 [2024-07-15 13:27:10.800807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:10232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.473 [2024-07-15 13:27:10.800823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:48.473 [2024-07-15 13:27:10.800846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:10240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.473 [2024-07-15 13:27:10.800864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:48.473 [2024-07-15 13:27:10.800887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:10248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.473 [2024-07-15 13:27:10.800902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:48.473 [2024-07-15 13:27:10.800924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:10256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.473 [2024-07-15 13:27:10.800940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:48.473 [2024-07-15 13:27:10.800962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:10264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.473 [2024-07-15 13:27:10.800977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:48.473 [2024-07-15 13:27:10.800999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:10272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.473 [2024-07-15 13:27:10.801014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:48.473 [2024-07-15 13:27:10.801037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:10280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.473 [2024-07-15 13:27:10.801052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:48.473 [2024-07-15 13:27:10.801937] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:10288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.473 [2024-07-15 13:27:10.801963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:48.473 [2024-07-15 13:27:10.801993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:9528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.473 [2024-07-15 13:27:10.802010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:48.473 [2024-07-15 13:27:10.802035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:9536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.473 [2024-07-15 13:27:10.802060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:48.473 [2024-07-15 13:27:10.802086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:9544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.473 [2024-07-15 13:27:10.802101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:48.473 [2024-07-15 13:27:10.802125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.473 [2024-07-15 13:27:10.802140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:48.473 [2024-07-15 13:27:10.802163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:9560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.473 [2024-07-15 13:27:10.802178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:48.473 [2024-07-15 13:27:10.802201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:9568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.473 [2024-07-15 13:27:10.802231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:48.473 [2024-07-15 13:27:10.802256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.473 [2024-07-15 13:27:10.802271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:48.473 [2024-07-15 13:27:10.802294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:10296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.473 [2024-07-15 13:27:10.802309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:48.473 [2024-07-15 13:27:10.802332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:10304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.473 [2024-07-15 13:27:10.802348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 
00:28:48.473 [2024-07-15 13:27:10.802371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:10312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.473 [2024-07-15 13:27:10.802387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:48.473 [2024-07-15 13:27:10.802410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:9584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.473 [2024-07-15 13:27:10.802425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:48.473 [2024-07-15 13:27:10.802448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:9592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.473 [2024-07-15 13:27:10.802464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:48.473 [2024-07-15 13:27:10.802487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.473 [2024-07-15 13:27:10.802502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:48.473 [2024-07-15 13:27:10.802525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:9608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.473 [2024-07-15 13:27:10.802541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:48.473 [2024-07-15 13:27:10.802572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:9616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.473 [2024-07-15 13:27:10.802588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:48.473 [2024-07-15 13:27:10.802612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.473 [2024-07-15 13:27:10.802627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:48.473 [2024-07-15 13:27:10.802651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:9632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.473 [2024-07-15 13:27:10.802666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:48.473 [2024-07-15 13:27:10.802689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:9640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.473 [2024-07-15 13:27:10.802703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:48.473 [2024-07-15 13:27:10.802726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:9648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.473 [2024-07-15 13:27:10.802741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:85 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:48.473 [2024-07-15 13:27:10.802785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:9656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.473 [2024-07-15 13:27:10.802801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:48.473 [2024-07-15 13:27:10.802825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:9664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.473 [2024-07-15 13:27:10.802841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:48.473 [2024-07-15 13:27:10.802864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:9672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.473 [2024-07-15 13:27:10.802880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:48.473 [2024-07-15 13:27:10.802903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:9680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.473 [2024-07-15 13:27:10.802918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:48.473 [2024-07-15 13:27:10.802941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:9688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.473 [2024-07-15 13:27:10.802955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:48.473 [2024-07-15 13:27:10.802979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:9696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.474 [2024-07-15 13:27:10.802994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:48.474 [2024-07-15 13:27:10.803017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:9704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.474 [2024-07-15 13:27:10.803033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:48.474 [2024-07-15 13:27:10.803064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:9712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.474 [2024-07-15 13:27:10.803145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:48.474 [2024-07-15 13:27:10.803171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.474 [2024-07-15 13:27:10.803187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:48.474 [2024-07-15 13:27:10.803223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.474 [2024-07-15 13:27:10.803241] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:48.474 [2024-07-15 13:27:10.803265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:9736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.474 [2024-07-15 13:27:10.803287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:48.474 [2024-07-15 13:27:10.803310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:9744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.474 [2024-07-15 13:27:10.803325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:48.474 [2024-07-15 13:27:10.803348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:9752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.474 [2024-07-15 13:27:10.803363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:48.474 [2024-07-15 13:27:10.803387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:9760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.474 [2024-07-15 13:27:10.803402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:48.474 [2024-07-15 13:27:10.803509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:9768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.474 [2024-07-15 13:27:10.803531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:48.474 [2024-07-15 13:27:10.803560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.474 [2024-07-15 13:27:10.803577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:48.474 [2024-07-15 13:27:10.803605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.474 [2024-07-15 13:27:10.803620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:48.474 [2024-07-15 13:27:10.803647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:9792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.474 [2024-07-15 13:27:10.803662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.474 [2024-07-15 13:27:10.803689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:9800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.474 [2024-07-15 13:27:10.803704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:48.474 [2024-07-15 13:27:10.803730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:9808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:48.474 [2024-07-15 13:27:10.803755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:48.474 [2024-07-15 13:27:10.803783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:9816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.474 [2024-07-15 13:27:10.803799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:48.474 [2024-07-15 13:27:10.803825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:9824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.474 [2024-07-15 13:27:10.803841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:48.474 [2024-07-15 13:27:10.803867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:9832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.474 [2024-07-15 13:27:10.803883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:48.474 [2024-07-15 13:27:10.803909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.474 [2024-07-15 13:27:10.803925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:48.474 [2024-07-15 13:27:10.803951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.474 [2024-07-15 13:27:10.803966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:48.474 [2024-07-15 13:27:10.803992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:10336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.474 [2024-07-15 13:27:10.804008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:48.474 [2024-07-15 13:27:10.804035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:10344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.474 [2024-07-15 13:27:10.804050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:48.474 [2024-07-15 13:27:10.804154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:10352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.474 [2024-07-15 13:27:10.804177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:48.474 [2024-07-15 13:27:10.804223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:10360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.474 [2024-07-15 13:27:10.804243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:48.474 [2024-07-15 13:27:10.804272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:10368 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.474 [2024-07-15 13:27:10.804288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:48.474 [2024-07-15 13:27:10.804315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:10376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.474 [2024-07-15 13:27:10.804330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:48.474 [2024-07-15 13:27:10.804358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:10384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.474 [2024-07-15 13:27:10.804383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:48.474 [2024-07-15 13:27:10.804412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:10392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.474 [2024-07-15 13:27:10.804427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:48.474 [2024-07-15 13:27:10.804455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:10400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.474 [2024-07-15 13:27:10.804471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:48.474 [2024-07-15 13:27:10.804500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:10408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.474 [2024-07-15 13:27:10.804515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:48.474 [2024-07-15 13:27:10.804543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:10416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.474 [2024-07-15 13:27:10.804558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:48.474 [2024-07-15 13:27:10.804586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:10424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.474 [2024-07-15 13:27:10.804601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:48.474 [2024-07-15 13:27:10.804628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:10432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.474 [2024-07-15 13:27:10.804643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:48.474 [2024-07-15 13:27:10.804670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:10440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.474 [2024-07-15 13:27:10.804694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:48.474 [2024-07-15 13:27:10.804724] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:10448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.475 [2024-07-15 13:27:10.804739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:48.475 [2024-07-15 13:27:10.804766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:10456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.475 [2024-07-15 13:27:10.804781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:48.475 [2024-07-15 13:27:10.804809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:10464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.475 [2024-07-15 13:27:10.804832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:48.475 [2024-07-15 13:27:10.804861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:10472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.475 [2024-07-15 13:27:10.804876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:48.475 [2024-07-15 13:27:10.804904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:10480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.475 [2024-07-15 13:27:10.804919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:48.475 [2024-07-15 13:27:10.804954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:10488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.475 [2024-07-15 13:27:10.804970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:48.475 [2024-07-15 13:27:10.804998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:10496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.475 [2024-07-15 13:27:10.805013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:48.475 [2024-07-15 13:27:10.805040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:10504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.475 [2024-07-15 13:27:10.805055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:48.475 [2024-07-15 13:27:10.805082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:10512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.475 [2024-07-15 13:27:10.805098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.475 [2024-07-15 13:27:10.805126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:10520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.475 [2024-07-15 13:27:10.805141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
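Every completion printed in this stretch ends with the same tail: (03/02) qid:1 cid:... cdw0:0 sqhd:... p:0 m:0 dnr:0. The (03/02) pair is the NVMe status code type and status code (type 0x3 is Path Related, and code 0x2 under that type is Asymmetric Access Inaccessible, matching the text SPDK prints), while sqhd, p, m, and dnr are the submission queue head pointer, phase tag, more bit, and do-not-retry bit from the completion queue entry; dnr:0 means the initiator may retry once a path becomes accessible again. A small standalone sketch of that decoding (the table below is filled in from the NVMe base specification for the Path Related type and is worth double-checking against the spec revision in use):

# Decode the "(SCT/SC)" pair and trailing fields of one completion,
# e.g. "... (03/02) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0".
PATH_RELATED = {   # NVMe status code type 0x3 (Path Related); values taken from the NVMe base spec
    0x00: "INTERNAL PATH ERROR",
    0x01: "ASYMMETRIC ACCESS PERSISTENT LOSS",
    0x02: "ASYMMETRIC ACCESS INACCESSIBLE",   # the status seen throughout this log
    0x03: "ASYMMETRIC ACCESS TRANSITION",
}

def describe(sct: int, sc: int, sqhd: int, p: int, m: int, dnr: int) -> str:
    """Render a completion roughly the way the log prints it (a sketch, not SPDK's own code)."""
    name = PATH_RELATED.get(sc, "UNKNOWN PATH STATUS") if sct == 0x3 else f"SCT {sct:#x} SC {sc:#x}"
    return f"{name} ({sct:02x}/{sc:02x}) sqhd:{sqhd:04x} p:{p} m:{m} dnr:{dnr}"

print(describe(0x3, 0x2, sqhd=0x0001, p=0, m=0, dnr=0))
# -> ASYMMETRIC ACCESS INACCESSIBLE (03/02) sqhd:0001 p:0 m:0 dnr:0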
00:28:48.475 [2024-07-15 13:27:10.805168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:10528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.475 [2024-07-15 13:27:10.805183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:48.475 [2024-07-15 13:27:10.805224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:10536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.475 [2024-07-15 13:27:10.805242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:48.475 [2024-07-15 13:27:10.805269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:10544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.475 [2024-07-15 13:27:10.805285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:48.475 [2024-07-15 13:27:10.805312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:9840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.475 [2024-07-15 13:27:10.805327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:48.475 [2024-07-15 13:27:10.805354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:9848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.475 [2024-07-15 13:27:10.805370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:48.475 [2024-07-15 13:27:10.805397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:9856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.475 [2024-07-15 13:27:10.805412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:48.475 [2024-07-15 13:27:10.805440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:9864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.475 [2024-07-15 13:27:10.805455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:48.475 [2024-07-15 13:27:10.805494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:9872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.475 [2024-07-15 13:27:10.805510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:48.475 [2024-07-15 13:27:10.805537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.475 [2024-07-15 13:27:10.805561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:48.475 [2024-07-15 13:27:10.805588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.475 [2024-07-15 13:27:10.805604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:7 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:48.475 [2024-07-15 13:27:10.805631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:9896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.475 [2024-07-15 13:27:10.805646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:48.475 [2024-07-15 13:27:10.805673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:9904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.475 [2024-07-15 13:27:10.805688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:48.475 [2024-07-15 13:27:10.805716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:9912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.475 [2024-07-15 13:27:10.805730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:48.475 [2024-07-15 13:27:10.805758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:9920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.475 [2024-07-15 13:27:10.805774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:48.475 [2024-07-15 13:27:10.805801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:9928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.475 [2024-07-15 13:27:10.805816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:48.475 [2024-07-15 13:27:10.805842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:9936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.475 [2024-07-15 13:27:10.805857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:48.475 [2024-07-15 13:27:10.805884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:9944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.475 [2024-07-15 13:27:10.805899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:48.475 [2024-07-15 13:27:10.805926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:9952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.475 [2024-07-15 13:27:10.805941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:28:48.475 [2024-07-15 13:27:10.805969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:9960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.475 [2024-07-15 13:27:10.805983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:48.475 [2024-07-15 13:27:10.806011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:9968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.475 [2024-07-15 13:27:10.806032] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:28:48.475 [2024-07-15 13:27:10.806061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:9976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.475 [2024-07-15 13:27:10.806077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:48.475 [2024-07-15 13:27:10.806104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:9984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.475 [2024-07-15 13:27:10.806120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:48.475 [2024-07-15 13:27:10.806147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:9992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.475 [2024-07-15 13:27:10.806162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:48.475 [2024-07-15 13:27:10.806189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:10000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.475 [2024-07-15 13:27:10.806214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:48.476 [2024-07-15 13:27:10.806245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:10008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.476 [2024-07-15 13:27:10.806266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:48.476 [2024-07-15 13:27:10.806293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:10016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.476 [2024-07-15 13:27:10.806309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:48.476 [2024-07-15 13:27:10.806337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:10024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.476 [2024-07-15 13:27:10.806352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:48.476 [2024-07-15 13:27:10.806379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:10032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.476 [2024-07-15 13:27:10.806394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:48.476 [2024-07-15 13:27:10.806422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.476 [2024-07-15 13:27:10.806438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:48.476 [2024-07-15 13:27:10.806464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:10048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:48.476 [2024-07-15 13:27:10.806479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:48.476 [2024-07-15 13:27:10.806507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:10056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.476 [2024-07-15 13:27:10.806521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:48.476 [2024-07-15 13:27:10.806548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:10064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.476 [2024-07-15 13:27:10.806570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:48.476 [2024-07-15 13:27:10.806599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.476 [2024-07-15 13:27:10.806615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:48.476 [2024-07-15 13:27:10.806641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:10080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.476 [2024-07-15 13:27:10.806656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:48.476 [2024-07-15 13:27:10.806684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.476 [2024-07-15 13:27:10.806699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:48.476 [2024-07-15 13:27:24.153571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:129008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.476 [2024-07-15 13:27:24.153644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:48.476 [2024-07-15 13:27:24.153706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:128496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.476 [2024-07-15 13:27:24.153728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:48.476 [2024-07-15 13:27:24.153753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:128504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.476 [2024-07-15 13:27:24.153769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:48.476 [2024-07-15 13:27:24.153792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:128512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.476 [2024-07-15 13:27:24.153808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:48.476 [2024-07-15 13:27:24.153829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 
nsid:1 lba:128520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.476 [2024-07-15 13:27:24.153845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:48.476 [2024-07-15 13:27:24.153867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:128528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.476 [2024-07-15 13:27:24.153883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:48.476 [2024-07-15 13:27:24.153905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:128536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.476 [2024-07-15 13:27:24.153920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:48.476 [2024-07-15 13:27:24.153941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:128544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.476 [2024-07-15 13:27:24.153956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:48.476 [2024-07-15 13:27:24.153978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:128552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.476 [2024-07-15 13:27:24.153993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:48.476 [2024-07-15 13:27:24.154050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:128560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.476 [2024-07-15 13:27:24.154067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:48.476 [2024-07-15 13:27:24.154089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:128568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.476 [2024-07-15 13:27:24.154103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:48.476 [2024-07-15 13:27:24.154125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:128576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.476 [2024-07-15 13:27:24.154140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:48.476 [2024-07-15 13:27:24.154161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:128584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.476 [2024-07-15 13:27:24.154176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:48.476 [2024-07-15 13:27:24.154198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:128592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.476 [2024-07-15 13:27:24.154228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:48.476 [2024-07-15 13:27:24.154252] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:128600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.476 [2024-07-15 13:27:24.154267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:48.476 [2024-07-15 13:27:24.154289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:128608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.477 [2024-07-15 13:27:24.154304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:48.477 [2024-07-15 13:27:24.154326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:128616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.477 [2024-07-15 13:27:24.154341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:48.477 [2024-07-15 13:27:24.154585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:128624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.477 [2024-07-15 13:27:24.154612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:48.477 [2024-07-15 13:27:24.154639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:128632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.477 [2024-07-15 13:27:24.154655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:48.477 [2024-07-15 13:27:24.154677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:128640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.477 [2024-07-15 13:27:24.154692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:48.477 [2024-07-15 13:27:24.154713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:128648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.477 [2024-07-15 13:27:24.154728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:48.477 [2024-07-15 13:27:24.154774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:128656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.477 [2024-07-15 13:27:24.154793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:48.477 [2024-07-15 13:27:24.154816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:128664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.477 [2024-07-15 13:27:24.154831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.477 [2024-07-15 13:27:24.154852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:128672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.477 [2024-07-15 13:27:24.154867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 
cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:48.477 [2024-07-15 13:27:24.154887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:128680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.477 [2024-07-15 13:27:24.154902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:48.477 [2024-07-15 13:27:24.154923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:128688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.477 [2024-07-15 13:27:24.154938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:48.477 [2024-07-15 13:27:24.154959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:128696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.477 [2024-07-15 13:27:24.154973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:48.477 [2024-07-15 13:27:24.154994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:128704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.477 [2024-07-15 13:27:24.155008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:48.477 [2024-07-15 13:27:24.155030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:128712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.477 [2024-07-15 13:27:24.155044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:48.477 [2024-07-15 13:27:24.155068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:128720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.477 [2024-07-15 13:27:24.155082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:48.477 [2024-07-15 13:27:24.155103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:128728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.477 [2024-07-15 13:27:24.155117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:48.477 [2024-07-15 13:27:24.155138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:128736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.477 [2024-07-15 13:27:24.155152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:48.477 [2024-07-15 13:27:24.155173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:128744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.477 [2024-07-15 13:27:24.155187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:48.477 [2024-07-15 13:27:24.155231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:128752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.477 [2024-07-15 13:27:24.155249] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:48.477 [2024-07-15 13:27:24.155270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:128760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.477 [2024-07-15 13:27:24.155285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:48.477 [2024-07-15 13:27:24.155306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:128768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.477 [2024-07-15 13:27:24.155320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:48.477 [2024-07-15 13:27:24.155342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:128776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.477 [2024-07-15 13:27:24.155356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:48.477 [2024-07-15 13:27:24.155378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:128784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.477 [2024-07-15 13:27:24.155392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:48.477 [2024-07-15 13:27:24.155413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:128792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.477 [2024-07-15 13:27:24.155428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:48.477 [2024-07-15 13:27:24.155450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:128800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.477 [2024-07-15 13:27:24.155464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:48.477 [2024-07-15 13:27:24.155486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:128808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.477 [2024-07-15 13:27:24.155500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:48.477 [2024-07-15 13:27:24.155521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:128816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.477 [2024-07-15 13:27:24.155536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:48.477 [2024-07-15 13:27:24.155557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:128824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.477 [2024-07-15 13:27:24.155572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:48.477 [2024-07-15 13:27:24.155592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:128832 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:48.477 [2024-07-15 13:27:24.155607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:48.477 [2024-07-15 13:27:24.155628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:128840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.477 [2024-07-15 13:27:24.155643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:48.477 [2024-07-15 13:27:24.155664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:128848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.477 [2024-07-15 13:27:24.155685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:48.477 [2024-07-15 13:27:24.155708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:128856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.477 [2024-07-15 13:27:24.155723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:48.477 [2024-07-15 13:27:24.155744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:128864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.477 [2024-07-15 13:27:24.155759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:48.477 [2024-07-15 13:27:24.155779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:128872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.477 [2024-07-15 13:27:24.155794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:48.478 [2024-07-15 13:27:24.156874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:128424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.478 [2024-07-15 13:27:24.156904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.478 [2024-07-15 13:27:24.156925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:128432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.478 [2024-07-15 13:27:24.156941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.478 [2024-07-15 13:27:24.156959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:128440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.478 [2024-07-15 13:27:24.156973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.478 [2024-07-15 13:27:24.156988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:128448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.478 [2024-07-15 13:27:24.157002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.478 [2024-07-15 13:27:24.157017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:128456 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.478 [2024-07-15 13:27:24.157031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.478 [2024-07-15 13:27:24.157046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:128464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.478 [2024-07-15 13:27:24.157060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.478 [2024-07-15 13:27:24.157075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:128472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.478 [2024-07-15 13:27:24.157088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.478 [2024-07-15 13:27:24.157103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:128480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.478 [2024-07-15 13:27:24.157117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.478 [2024-07-15 13:27:24.157132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:128488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.478 [2024-07-15 13:27:24.157145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.478 [2024-07-15 13:27:24.157172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:129016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.478 [2024-07-15 13:27:24.157187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.478 [2024-07-15 13:27:24.157202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:129024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.478 [2024-07-15 13:27:24.157233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.478 [2024-07-15 13:27:24.157249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:129032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.478 [2024-07-15 13:27:24.157262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.478 [2024-07-15 13:27:24.157277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:129040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.478 [2024-07-15 13:27:24.157291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.478 [2024-07-15 13:27:24.157306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:129048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.478 [2024-07-15 13:27:24.157319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.478 [2024-07-15 13:27:24.157334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:129056 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:28:48.478 [2024-07-15 13:27:24.157348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.478 [2024-07-15 13:27:24.157363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:129064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.478 [2024-07-15 13:27:24.157376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.478 [2024-07-15 13:27:24.157392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:129072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.478 [2024-07-15 13:27:24.157405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.478 [2024-07-15 13:27:24.157421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:129080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.478 [2024-07-15 13:27:24.157434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.478 [2024-07-15 13:27:24.157450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:129088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.478 [2024-07-15 13:27:24.157464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.478 [2024-07-15 13:27:24.157479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:129096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.478 [2024-07-15 13:27:24.157492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.478 [2024-07-15 13:27:24.157508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:129104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.478 [2024-07-15 13:27:24.157521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.478 [2024-07-15 13:27:24.157536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:129112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.478 [2024-07-15 13:27:24.157557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.478 [2024-07-15 13:27:24.157573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:129120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.478 [2024-07-15 13:27:24.157587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.478 [2024-07-15 13:27:24.157602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:129128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.478 [2024-07-15 13:27:24.157615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.478 [2024-07-15 13:27:24.157630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:129136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.478 [2024-07-15 
13:27:24.157643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.478 [2024-07-15 13:27:24.157658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:129144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.478 [2024-07-15 13:27:24.157671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.478 [2024-07-15 13:27:24.157697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:129152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.478 [2024-07-15 13:27:24.157711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.478 [2024-07-15 13:27:24.157726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:129160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.478 [2024-07-15 13:27:24.157739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.478 [2024-07-15 13:27:24.157754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:129168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.478 [2024-07-15 13:27:24.157767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.478 [2024-07-15 13:27:24.157781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:129176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.478 [2024-07-15 13:27:24.157794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.478 [2024-07-15 13:27:24.157810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:129184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.478 [2024-07-15 13:27:24.157823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.478 [2024-07-15 13:27:24.157837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:129192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.478 [2024-07-15 13:27:24.157850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.478 [2024-07-15 13:27:24.157865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:129200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.478 [2024-07-15 13:27:24.157879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.478 [2024-07-15 13:27:24.157894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:128880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.479 [2024-07-15 13:27:24.157907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.479 [2024-07-15 13:27:24.157928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:128888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.479 [2024-07-15 13:27:24.157941] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.479 [2024-07-15 13:27:24.157957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:128896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.479 [2024-07-15 13:27:24.157970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.479 [2024-07-15 13:27:24.157985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:128904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.479 [2024-07-15 13:27:24.157998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.479 [2024-07-15 13:27:24.158013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:128912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.479 [2024-07-15 13:27:24.158026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.479 [2024-07-15 13:27:24.158041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:128920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.479 [2024-07-15 13:27:24.158055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.479 [2024-07-15 13:27:24.158069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:128928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.479 [2024-07-15 13:27:24.158082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.479 [2024-07-15 13:27:24.158097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:128936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.479 [2024-07-15 13:27:24.158116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.479 [2024-07-15 13:27:24.158142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.479 [2024-07-15 13:27:24.158171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.479 [2024-07-15 13:27:24.158194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:129208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.479 [2024-07-15 13:27:24.158227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.479 [2024-07-15 13:27:24.158256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:129216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.479 [2024-07-15 13:27:24.158277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.479 [2024-07-15 13:27:24.158293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:129224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.479 [2024-07-15 13:27:24.158307] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.479 [2024-07-15 13:27:24.158321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:129232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.479 [2024-07-15 13:27:24.158334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.479 [2024-07-15 13:27:24.158350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:129240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.479 [2024-07-15 13:27:24.158373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.479 [2024-07-15 13:27:24.158389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:129248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.479 [2024-07-15 13:27:24.158402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.479 [2024-07-15 13:27:24.158418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:129256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.479 [2024-07-15 13:27:24.158431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.479 [2024-07-15 13:27:24.158455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:129264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.479 [2024-07-15 13:27:24.158468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.479 [2024-07-15 13:27:24.158483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:129272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.479 [2024-07-15 13:27:24.158497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.479 [2024-07-15 13:27:24.158512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:129280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.479 [2024-07-15 13:27:24.158525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.479 [2024-07-15 13:27:24.158540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:129288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.479 [2024-07-15 13:27:24.158553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.479 [2024-07-15 13:27:24.158567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:129296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.479 [2024-07-15 13:27:24.158580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.479 [2024-07-15 13:27:24.158595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:129304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.479 [2024-07-15 13:27:24.158608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.479 [2024-07-15 13:27:24.158623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:129312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.479 [2024-07-15 13:27:24.158636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.479 [2024-07-15 13:27:24.158651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:129320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.479 [2024-07-15 13:27:24.158664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.479 [2024-07-15 13:27:24.158679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:128952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.479 [2024-07-15 13:27:24.158692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.479 [2024-07-15 13:27:24.158708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:128960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.479 [2024-07-15 13:27:24.158721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.479 [2024-07-15 13:27:24.158741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:128968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.479 [2024-07-15 13:27:24.158766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.479 [2024-07-15 13:27:24.158785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:128976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.479 [2024-07-15 13:27:24.158798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.479 [2024-07-15 13:27:24.158813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:128984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.479 [2024-07-15 13:27:24.158827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.479 [2024-07-15 13:27:24.158842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:128992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.479 [2024-07-15 13:27:24.158855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.479 [2024-07-15 13:27:24.158870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:129000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.479 [2024-07-15 13:27:24.158883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.479 [2024-07-15 13:27:24.158904] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1f900 is same with the state(5) to be set 00:28:48.479 [2024-07-15 13:27:24.159097] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1e1f900 was disconnected and freed. reset controller. 
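The block above is the host-side view of a path being taken away: outstanding READ/WRITE commands on I/O qpair 1 complete with ASYMMETRIC ACCESS INACCESSIBLE (03/02), the remaining ones are failed with ABORTED - SQ DELETION (00/08) as the submission queue is deleted, and bdev_nvme then disconnects and frees the qpair (0x1e1f900) and schedules a controller reset. In the multipath test this condition is typically provoked from the target side by flipping the ANA state of the active listener. A minimal sketch of that step follows; the flag spellings are assumptions from memory rather than something shown in this log, so verify them against scripts/rpc.py nvmf_subsystem_listener_set_ana_state --help.

  # mark the 4420 listener ANA-inaccessible so host I/O has to fail over (flag names assumed)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
      nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
  # restore it once traffic has moved to the other path
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
      nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized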
00:28:48.479 [2024-07-15 13:27:24.159975] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.479 [2024-07-15 13:27:24.160059] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:48.480 [2024-07-15 13:27:24.160083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:48.480 [2024-07-15 13:27:24.160115] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e26b70 (9): Bad file descriptor
00:28:48.480 [2024-07-15 13:27:24.161514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.480 [2024-07-15 13:27:24.161548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e26b70 with addr=10.0.0.2, port=4421
00:28:48.480 [2024-07-15 13:27:24.161565] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e26b70 is same with the state(5) to be set
00:28:48.480 [2024-07-15 13:27:24.161687] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e26b70 (9): Bad file descriptor
00:28:48.480 [2024-07-15 13:27:24.161825] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.480 [2024-07-15 13:27:24.161849] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.480 [2024-07-15 13:27:24.161865] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.480 [2024-07-15 13:27:24.161970] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.480 [2024-07-15 13:27:24.161990] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.480 [2024-07-15 13:27:34.232586] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
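Read in sequence, the entries above are one failed reset followed by a successful one: the controller is disconnected, the pending admin GET LOG PAGE (cdw10:0014000c, i.e. log page 0x0c, the ANA log) is aborted by SQ deletion, the first TCP reconnect to 10.0.0.2 port 4421 is refused (connect() errno 111), so reinitialization fails and the controller is left in the failed state; about ten seconds later the retry succeeds and bdev_nvme reports the reset as successful. For that failover to work, the bdevperf host must have been told about both listeners up front. A hedged sketch of that wiring follows; the bdevperf RPC socket path is the one this suite uses elsewhere, and the -x multipath option and the other flags are assumptions about scripts/rpc.py bdev_nvme_attach_controller rather than something shown in this log.

  # attach the same subsystem through both target ports so bdev_nvme can fail over between them
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath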
00:28:48.480 Received shutdown signal, test time was about 55.578043 seconds
00:28:48.480
00:28:48.480 Latency(us)
00:28:48.480 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:48.480 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:28:48.480 Verification LBA range: start 0x0 length 0x4000
00:28:48.480 Nvme0n1 : 55.58 6675.47 26.08 0.00 0.00 19140.15 1273.48 7015926.69
00:28:48.480 ===================================================================================================================
00:28:48.480 Total : 6675.47 26.08 0.00 0.00 19140.15 1273.48 7015926.69
00:28:48.480 13:27:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:28:48.480 13:27:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT
00:28:48.480 13:27:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:28:48.480 13:27:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini
00:28:48.480 13:27:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@488 -- # nvmfcleanup
00:28:48.480 13:27:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@117 -- # sync
00:28:48.480 13:27:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:28:48.480 13:27:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@120 -- # set +e
00:28:48.480 13:27:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@121 -- # for i in {1..20}
00:28:48.480 13:27:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:28:48.480 rmmod nvme_tcp
00:28:48.480 rmmod nvme_fabrics
00:28:48.480 rmmod nvme_keyring
00:28:48.480 13:27:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:28:48.480 13:27:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@124 -- # set -e
00:28:48.480 13:27:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@125 -- # return 0
00:28:48.480 13:27:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@489 -- # '[' -n 113203 ']'
00:28:48.480 13:27:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@490 -- # killprocess 113203
00:28:48.480 13:27:45 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@946 -- # '[' -z 113203 ']'
00:28:48.480 13:27:45 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@950 -- # kill -0 113203
00:28:48.480 13:27:45 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@951 -- # uname
00:28:48.480 13:27:45 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:28:48.480 13:27:45 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 113203
00:28:48.480 13:27:45 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:28:48.480 killing process with pid 113203
00:28:48.480 13:27:45 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:28:48.480 13:27:45 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@964 -- # echo 'killing process with pid 113203'
00:28:48.480 13:27:45 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@965 -- # kill 113203
00:28:48.480 13:27:45 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@970 -- # wait 113203
00:28:48.738 13:27:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:28:48.738 13:27:45 nvmf_tcp.nvmf_host_multipath --
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:48.738 13:27:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:48.738 13:27:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:48.738 13:27:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:48.738 13:27:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:48.738 13:27:45 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:48.738 13:27:45 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:48.738 13:27:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:28:48.738 00:28:48.738 real 1m1.533s 00:28:48.738 user 2m54.383s 00:28:48.738 sys 0m13.557s 00:28:48.739 13:27:45 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:48.739 ************************************ 00:28:48.739 13:27:45 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:28:48.739 END TEST nvmf_host_multipath 00:28:48.739 ************************************ 00:28:48.739 13:27:45 nvmf_tcp -- nvmf/nvmf.sh@118 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:28:48.739 13:27:45 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:48.739 13:27:45 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:48.739 13:27:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:48.739 ************************************ 00:28:48.739 START TEST nvmf_timeout 00:28:48.739 ************************************ 00:28:48.739 13:27:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:28:48.739 * Looking for test storage... 
00:28:48.996 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:28:48.996 13:27:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:48.996 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:28:48.996 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:48.996 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:48.996 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:48.996 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:48.996 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:48.996 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:48.996 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:48.996 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:48.996 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:48.996 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:48.996 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:28:48.996 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=c8b8b44b-387e-43b9-a950-dc0d98528a02 00:28:48.997 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:48.997 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:48.997 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:48.997 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:48.997 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:48.997 13:27:45 nvmf_tcp.nvmf_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:48.997 13:27:45 nvmf_tcp.nvmf_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:48.997 13:27:45 nvmf_tcp.nvmf_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:48.997 13:27:45 nvmf_tcp.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.997 13:27:45 nvmf_tcp.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.997 
13:27:45 nvmf_tcp.nvmf_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.997 13:27:45 nvmf_tcp.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:28:48.997 13:27:45 nvmf_tcp.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.997 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@47 -- # : 0 00:28:48.997 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:48.997 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:48.997 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:48.997 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:48.997 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:48.997 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:48.997 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:48.997 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:48.997 13:27:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:48.997 13:27:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:48.997 13:27:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:48.997 13:27:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:28:48.997 13:27:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:48.997 13:27:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:28:48.997 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:48.997 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:48.997 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:48.997 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:48.997 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:48.997 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:48.997 13:27:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:48.997 13:27:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:48.997 13:27:45 
nvmf_tcp.nvmf_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:28:48.997 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:28:48.997 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:28:48.997 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:28:48.997 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:28:48.997 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 00:28:48.997 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:48.997 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:48.997 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:28:48.997 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:28:48.997 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:48.997 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:48.997 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:48.997 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:48.997 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:48.997 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:48.997 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:48.997 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:48.997 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:28:48.997 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:28:48.997 Cannot find device "nvmf_tgt_br" 00:28:48.997 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # true 00:28:48.997 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:28:48.997 Cannot find device "nvmf_tgt_br2" 00:28:48.997 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # true 00:28:48.997 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:28:48.997 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:28:48.997 Cannot find device "nvmf_tgt_br" 00:28:48.997 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # true 00:28:48.997 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:28:48.997 Cannot find device "nvmf_tgt_br2" 00:28:48.997 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # true 00:28:48.997 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:28:48.997 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:28:48.997 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:48.997 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:48.997 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:28:48.997 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:48.997 Cannot open network 
namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:48.997 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:28:48.997 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:28:48.997 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:48.997 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:48.997 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:48.997 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:48.997 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:49.255 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:49.255 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:28:49.255 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:28:49.255 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:28:49.255 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:28:49.255 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:28:49.255 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:28:49.255 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:49.255 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:49.255 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:49.255 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:28:49.255 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:28:49.255 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:28:49.255 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:49.255 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:49.255 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:49.255 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:49.255 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:28:49.255 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:49.255 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.102 ms 00:28:49.255 00:28:49.255 --- 10.0.0.2 ping statistics --- 00:28:49.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:49.255 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:28:49.255 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:28:49.255 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:28:49.255 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:28:49.255 00:28:49.255 --- 10.0.0.3 ping statistics --- 00:28:49.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:49.255 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:28:49.255 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:49.255 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:49.255 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:28:49.255 00:28:49.255 --- 10.0.0.1 ping statistics --- 00:28:49.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:49.255 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:28:49.255 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:49.255 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@433 -- # return 0 00:28:49.255 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:49.255 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:49.255 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:49.255 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:49.255 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:49.255 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:49.255 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:49.255 13:27:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:28:49.255 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:49.255 13:27:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:49.255 13:27:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:49.255 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@481 -- # nvmfpid=114540 00:28:49.255 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@482 -- # waitforlisten 114540 00:28:49.255 13:27:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:28:49.255 13:27:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@827 -- # '[' -z 114540 ']' 00:28:49.255 13:27:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:49.255 13:27:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:49.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:49.255 13:27:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:49.255 13:27:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:49.255 13:27:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:49.255 [2024-07-15 13:27:45.949862] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
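For anyone reproducing the test network outside the harness, here is a minimal sketch of the veth/bridge/namespace topology that nvmf_veth_init builds above (the second target interface nvmf_tgt_if2/nvmf_tgt_br2 and the FORWARD rule are omitted for brevity; interface names and addresses are taken straight from the log):

  # one initiator veth pair and one target veth pair, target end moved into a netns,
  # bridge ends joined on nvmf_br
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

The iptables rule is there so a restrictive INPUT chain on the build host does not drop NVMe/TCP traffic (port 4420) arriving on the initiator-side veth.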
00:28:49.255 [2024-07-15 13:27:45.949961] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:49.513 [2024-07-15 13:27:46.090100] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:49.513 [2024-07-15 13:27:46.191550] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:49.513 [2024-07-15 13:27:46.191630] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:49.513 [2024-07-15 13:27:46.191645] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:49.513 [2024-07-15 13:27:46.191656] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:49.513 [2024-07-15 13:27:46.191665] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:49.513 [2024-07-15 13:27:46.191850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:49.513 [2024-07-15 13:27:46.191864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:50.447 13:27:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:50.447 13:27:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@860 -- # return 0 00:28:50.447 13:27:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:50.447 13:27:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:50.447 13:27:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:50.447 13:27:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:50.447 13:27:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:50.447 13:27:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:50.705 [2024-07-15 13:27:47.220378] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:50.705 13:27:47 nvmf_tcp.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:28:50.962 Malloc0 00:28:50.962 13:27:47 nvmf_tcp.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:51.220 13:27:47 nvmf_tcp.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:51.478 13:27:48 nvmf_tcp.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:51.736 [2024-07-15 13:27:48.299178] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:51.736 13:27:48 nvmf_tcp.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=114631 00:28:51.736 13:27:48 nvmf_tcp.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 114631 /var/tmp/bdevperf.sock 00:28:51.736 13:27:48 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@827 -- # '[' -z 114631 ']' 00:28:51.736 13:27:48 nvmf_tcp.nvmf_timeout -- 
common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:51.736 13:27:48 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:51.736 13:27:48 nvmf_tcp.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:28:51.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:51.736 13:27:48 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:51.736 13:27:48 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:51.736 13:27:48 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:51.736 [2024-07-15 13:27:48.375571] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:28:51.736 [2024-07-15 13:27:48.375684] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114631 ] 00:28:51.994 [2024-07-15 13:27:48.514865] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:51.994 [2024-07-15 13:27:48.613890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:52.959 13:27:49 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:52.959 13:27:49 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@860 -- # return 0 00:28:52.959 13:27:49 nvmf_tcp.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:28:52.959 13:27:49 nvmf_tcp.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:28:53.216 NVMe0n1 00:28:53.216 13:27:49 nvmf_tcp.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=114679 00:28:53.216 13:27:49 nvmf_tcp.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:53.216 13:27:49 nvmf_tcp.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:28:53.474 Running I/O for 10 seconds... 
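Stripped of the xtrace noise, the target and host bring-up traced from host/timeout.sh above boils down to the following RPC sequence (the rpc shell variable and the line continuations are editorial shorthand; every call and flag is exactly as logged):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # target side: nvmf_tgt running inside nvmf_tgt_ns_spdk, default socket /var/tmp/spdk.sock
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # host side: bdevperf started with -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

The two controller options are the knobs this test exercises: reconnect attempts every 2 seconds, and the controller declared lost after 5 seconds without a working connection.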
00:28:54.408 13:27:50 nvmf_tcp.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:28:54.408 [2024-07-15 13:27:51.119076] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc96d60 is same with the state(5) to be set
(the recv-state line above repeats several dozen times, timestamps 13:27:51.119076 through 13:27:51.119594, while the target tears the queue pair down after the listener is removed)
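Every request bdevperf still had in flight is now failed back with an SQ-deletion status; together with the two manually completed writes further down, the aborts that follow add up to roughly the configured queue depth of 128. If the console output is saved to a file (timeout.log is a hypothetical name), a quick count is:

  grep -c 'ABORTED - SQ DELETION' timeout.log   # timeout.log: wherever this console output was captured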
00:28:54.408 [2024-07-15 13:27:51.120659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:80216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.408 [2024-07-15 13:27:51.120701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
(the command/completion pair above repeats for every request still queued on the connection, READs covering lba 80224 through 80896 and WRITEs covering lba 80904 through 81216, each completed with ABORTED - SQ DELETION (00/08), timestamps running through 13:27:51.123299)
00:28:54.411 [2024-07-15 13:27:51.123335]
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:54.411 [2024-07-15 13:27:51.123346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81224 len:8 PRP1 0x0 PRP2 0x0 00:28:54.411 [2024-07-15 13:27:51.123356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.411 [2024-07-15 13:27:51.123369] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:54.411 [2024-07-15 13:27:51.123377] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:54.411 [2024-07-15 13:27:51.123385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81232 len:8 PRP1 0x0 PRP2 0x0 00:28:54.411 [2024-07-15 13:27:51.123395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.411 [2024-07-15 13:27:51.123449] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x22fad40 was disconnected and freed. reset controller. 00:28:54.411 [2024-07-15 13:27:51.123544] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:54.411 [2024-07-15 13:27:51.123561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.411 [2024-07-15 13:27:51.123571] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:54.411 [2024-07-15 13:27:51.123581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.411 [2024-07-15 13:27:51.123591] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:54.411 [2024-07-15 13:27:51.123600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.411 [2024-07-15 13:27:51.123609] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:54.411 [2024-07-15 13:27:51.123618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.411 [2024-07-15 13:27:51.123628] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c7c60 is same with the state(5) to be set 00:28:54.411 [2024-07-15 13:27:51.123859] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.411 [2024-07-15 13:27:51.123891] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c7c60 (9): Bad file descriptor 00:28:54.411 [2024-07-15 13:27:51.123993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.411 [2024-07-15 13:27:51.124013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c7c60 with addr=10.0.0.2, port=4420 00:28:54.411 [2024-07-15 13:27:51.124024] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c7c60 is same with the state(5) to be set 00:28:54.411 [2024-07-15 13:27:51.124042] 
nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c7c60 (9): Bad file descriptor 00:28:54.411 [2024-07-15 13:27:51.124057] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:54.411 [2024-07-15 13:27:51.124066] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:54.411 [2024-07-15 13:27:51.124076] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:54.411 [2024-07-15 13:27:51.124096] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:54.411 [2024-07-15 13:27:51.124113] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.411 13:27:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:28:56.937 [2024-07-15 13:27:53.124472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.937 [2024-07-15 13:27:53.124560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c7c60 with addr=10.0.0.2, port=4420 00:28:56.937 [2024-07-15 13:27:53.124577] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c7c60 is same with the state(5) to be set 00:28:56.937 [2024-07-15 13:27:53.124606] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c7c60 (9): Bad file descriptor 00:28:56.937 [2024-07-15 13:27:53.124627] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.937 [2024-07-15 13:27:53.124638] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.937 [2024-07-15 13:27:53.124649] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.937 [2024-07-15 13:27:53.124679] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
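[editor's note] The records above and immediately below are the bdev_nvme reconnect cycle that follows the listener removal: each failed connect() (errno = 111) is retried roughly every two seconds — 13:27:51 and 13:27:53 here, with the remaining attempts in the records that follow — until the controller is left permanently failed. As an aside, a small shell sketch for pulling that retry timeline out of a saved copy of this output; the log file name and the grep/awk glue are assumptions, not anything the test itself produces.

#!/usr/bin/env bash
# Sketch: summarize the reconnect timeline from a saved copy of this output.
# The file name is an assumption; the patterns match the records in this log.
LOG=${1:-timeout.log}

# Every failed reconnect leaves a posix.c "connect() failed, errno = 111"
# record; its bracketed timestamp shows how far apart the attempts are.
grep -o '\[[0-9-]* [0-9:.]*\] posix.c:[0-9]*:posix_sock_create: \*ERROR\*: connect() failed, errno = 111' "$LOG" |
  awk '{ gsub(/[][]/, "", $2); print "reconnect attempt at", $2 }'

# Once the loss timeout expires the controller stays down for good.
echo "records marking the controller as permanently failed:"
grep -c 'already in failed state' "$LOG"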
00:28:56.937 [2024-07-15 13:27:53.124692] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.937 13:27:53 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:28:56.937 13:27:53 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:56.937 13:27:53 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:28:56.937 13:27:53 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:28:56.937 13:27:53 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:28:56.937 13:27:53 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:28:56.937 13:27:53 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:28:57.194 13:27:53 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:28:57.194 13:27:53 nvmf_tcp.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:28:58.565 [2024-07-15 13:27:55.124903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.565 [2024-07-15 13:27:55.124977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c7c60 with addr=10.0.0.2, port=4420 00:28:58.565 [2024-07-15 13:27:55.124994] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c7c60 is same with the state(5) to be set 00:28:58.565 [2024-07-15 13:27:55.125024] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c7c60 (9): Bad file descriptor 00:28:58.565 [2024-07-15 13:27:55.125055] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.565 [2024-07-15 13:27:55.125067] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.565 [2024-07-15 13:27:55.125079] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.565 [2024-07-15 13:27:55.125108] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:58.565 [2024-07-15 13:27:55.125120] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.460 [2024-07-15 13:27:57.125296] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.460 [2024-07-15 13:27:57.125367] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.460 [2024-07-15 13:27:57.125393] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.460 [2024-07-15 13:27:57.125412] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:29:00.460 [2024-07-15 13:27:57.125459] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
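[editor's note] The two helpers traced here, get_controller (host/timeout.sh@41) and get_bdev (host/timeout.sh@37), ask the bdevperf application over its private RPC socket which NVMe controllers and bdevs it still holds and strip out the names with jq; at this point they still return NVMe0 and NVMe0n1, and once the loss timeout expires in the next block they return empty strings. A minimal standalone version of that check, using the same rpc.py path and socket as this run; only the surrounding scaffolding is added here.

#!/usr/bin/env bash
# Standalone version of the get_controller/get_bdev checks traced above.
# The rpc.py path and the bdevperf RPC socket are the ones used in this run.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/bdevperf.sock

get_controller() {
    # Lists the NVMe controllers bdevperf currently holds; empty output means
    # bdev_nvme has already given up on the controller.
    "$RPC" -s "$SOCK" bdev_nvme_get_controllers | jq -r '.[].name'
}

get_bdev() {
    # The namespace bdev (NVMe0n1 here) disappears together with its controller.
    "$RPC" -s "$SOCK" bdev_get_bdevs | jq -r '.[].name'
}

[[ $(get_controller) == NVMe0 ]] && echo "controller still attached"
[[ $(get_bdev) == NVMe0n1 ]] && echo "namespace bdev still present"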
00:29:01.392 00:29:01.392 Latency(us) 00:29:01.392 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:01.392 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:29:01.392 Verification LBA range: start 0x0 length 0x4000 00:29:01.392 NVMe0n1 : 8.14 1231.44 4.81 15.72 0.00 102487.33 2144.81 7015926.69 00:29:01.392 =================================================================================================================== 00:29:01.392 Total : 1231.44 4.81 15.72 0.00 102487.33 2144.81 7015926.69 00:29:01.392 0 00:29:02.324 13:27:58 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:29:02.324 13:27:58 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:02.324 13:27:58 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:29:02.324 13:27:58 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:29:02.324 13:27:58 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:29:02.324 13:27:58 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:29:02.324 13:27:58 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:29:02.582 13:27:59 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:29:02.582 13:27:59 nvmf_tcp.nvmf_timeout -- host/timeout.sh@65 -- # wait 114679 00:29:02.582 13:27:59 nvmf_tcp.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 114631 00:29:02.582 13:27:59 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@946 -- # '[' -z 114631 ']' 00:29:02.582 13:27:59 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@950 -- # kill -0 114631 00:29:02.582 13:27:59 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # uname 00:29:02.582 13:27:59 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:02.582 13:27:59 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 114631 00:29:02.582 13:27:59 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:29:02.582 13:27:59 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:29:02.582 killing process with pid 114631 00:29:02.582 13:27:59 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@964 -- # echo 'killing process with pid 114631' 00:29:02.582 13:27:59 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@965 -- # kill 114631 00:29:02.582 Received shutdown signal, test time was about 9.275376 seconds 00:29:02.582 00:29:02.582 Latency(us) 00:29:02.582 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:02.582 =================================================================================================================== 00:29:02.582 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:02.582 13:27:59 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@970 -- # wait 114631 00:29:02.840 13:27:59 nvmf_tcp.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:03.098 [2024-07-15 13:27:59.727089] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:03.098 13:27:59 nvmf_tcp.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=114836 00:29:03.098 13:27:59 nvmf_tcp.nvmf_timeout -- host/timeout.sh@73 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:29:03.098 13:27:59 nvmf_tcp.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 114836 /var/tmp/bdevperf.sock 00:29:03.098 13:27:59 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@827 -- # '[' -z 114836 ']' 00:29:03.099 13:27:59 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:03.099 13:27:59 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:03.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:03.099 13:27:59 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:03.099 13:27:59 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:03.099 13:27:59 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:03.099 [2024-07-15 13:27:59.794937] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:29:03.099 [2024-07-15 13:27:59.795038] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114836 ] 00:29:03.357 [2024-07-15 13:27:59.925756] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:03.357 [2024-07-15 13:28:00.026333] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:04.291 13:28:00 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:04.291 13:28:00 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@860 -- # return 0 00:29:04.291 13:28:00 nvmf_tcp.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:29:04.548 13:28:01 nvmf_tcp.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:29:04.806 NVMe0n1 00:29:04.806 13:28:01 nvmf_tcp.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=114879 00:29:04.806 13:28:01 nvmf_tcp.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:04.806 13:28:01 nvmf_tcp.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:29:05.064 Running I/O for 10 seconds... 
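[editor's note] This stretch of the trace sets up the second half of the test: the listener for nqn.2016-06.io.spdk:cnode1 is re-added on 10.0.0.2:4420, a fresh bdevperf is started in wait-for-RPC mode (-z) on /var/tmp/bdevperf.sock, and the controller is re-attached with explicit recovery knobs, roughly: retry the connection every second (--reconnect-delay-sec 1), start failing I/O after two seconds without a connection (--fast-io-fail-timeout-sec 2), and drop the controller entirely after five (--ctrlr-loss-timeout-sec 5); bdevperf.py perform_tests then kicks off the 10-second verify workload. A condensed sketch of that sequence follows; the RPC commands and their arguments are taken from the trace, while SPDK_DIR, the backgrounding and the socket-polling loop are added glue.

#!/usr/bin/env bash
# Sketch of the sequence traced above. RPC method names, arguments and paths
# come from this run; SPDK_DIR and the polling loop are assumed glue rather
# than part of host/timeout.sh itself.
SPDK_DIR=/home/vagrant/spdk_repo/spdk
RPC=$SPDK_DIR/scripts/rpc.py
SOCK=/var/tmp/bdevperf.sock

# Target side: expose the subsystem on 10.0.0.2:4420 again.
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: bdevperf on core 2 (-m 0x4), waiting for RPC configuration (-z),
# 128 outstanding 4 KiB verify I/Os for 10 seconds once started.
"$SPDK_DIR"/build/examples/bdevperf -m 0x4 -z -r "$SOCK" -q 128 -o 4096 -w verify -t 10 -f &
bdevperf_pid=$!

# The harness does this with waitforlisten; a plain poll on the socket works too.
until [ -S "$SOCK" ]; do sleep 0.2; done

"$RPC" -s "$SOCK" bdev_nvme_set_options -r -1
"$RPC" -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1

# Start the verify job; the harness backgrounds this and sleeps before
# removing the listener again to provoke the timeout path.
"$SPDK_DIR"/examples/bdev/bdevperf/bdevperf.py -s "$SOCK" perform_tests &

With these values the recovery window is deliberately shorter than the 10-second I/O run, which is what lets the listener removal in the next block exercise the fast-io-fail and controller-loss paths while I/O is still queued.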
00:29:06.001 13:28:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:06.001 [2024-07-15 13:28:02.698693] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9bd50 is same with the state(5) to be set 00:29:06.001 [2024-07-15 13:28:02.698747] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9bd50 is same with the state(5) to be set 00:29:06.001 [2024-07-15 13:28:02.698768] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9bd50 is same with the state(5) to be set 00:29:06.001 [2024-07-15 13:28:02.698798] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9bd50 is same with the state(5) to be set 00:29:06.001 [2024-07-15 13:28:02.698806] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9bd50 is same with the state(5) to be set 00:29:06.001 [2024-07-15 13:28:02.698815] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9bd50 is same with the state(5) to be set 00:29:06.001 [2024-07-15 13:28:02.698823] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9bd50 is same with the state(5) to be set 00:29:06.001 [2024-07-15 13:28:02.698831] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9bd50 is same with the state(5) to be set 00:29:06.001 [2024-07-15 13:28:02.698840] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9bd50 is same with the state(5) to be set 00:29:06.001 [2024-07-15 13:28:02.698848] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9bd50 is same with the state(5) to be set 00:29:06.001 [2024-07-15 13:28:02.698856] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9bd50 is same with the state(5) to be set 00:29:06.001 [2024-07-15 13:28:02.698864] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9bd50 is same with the state(5) to be set 00:29:06.001 [2024-07-15 13:28:02.698872] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9bd50 is same with the state(5) to be set 00:29:06.001 [2024-07-15 13:28:02.698880] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9bd50 is same with the state(5) to be set 00:29:06.001 [2024-07-15 13:28:02.698888] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9bd50 is same with the state(5) to be set 00:29:06.001 [2024-07-15 13:28:02.698896] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9bd50 is same with the state(5) to be set 00:29:06.001 [2024-07-15 13:28:02.699524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:80568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.001 [2024-07-15 13:28:02.699576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.001 [2024-07-15 13:28:02.699612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:80576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.001 [2024-07-15 13:28:02.699632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.001 [2024-07-15 13:28:02.699651] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:80584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.001 [2024-07-15 13:28:02.699668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.001 [2024-07-15 13:28:02.699686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:80592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.001 [2024-07-15 13:28:02.699701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.001 [2024-07-15 13:28:02.699719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:80600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.001 [2024-07-15 13:28:02.699735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.001 [2024-07-15 13:28:02.699753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:80608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.001 [2024-07-15 13:28:02.699768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.001 [2024-07-15 13:28:02.699786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:80616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.001 [2024-07-15 13:28:02.699801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.001 [2024-07-15 13:28:02.699818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:80624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.001 [2024-07-15 13:28:02.699834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.001 [2024-07-15 13:28:02.699852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:80632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.002 [2024-07-15 13:28:02.699868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.002 [2024-07-15 13:28:02.699886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:80640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.002 [2024-07-15 13:28:02.699902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.002 [2024-07-15 13:28:02.699919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:80648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.002 [2024-07-15 13:28:02.699934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.002 [2024-07-15 13:28:02.699951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:80656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.002 [2024-07-15 13:28:02.699966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.002 [2024-07-15 13:28:02.699985] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:80664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.002 [2024-07-15 13:28:02.700000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.002 [2024-07-15 13:28:02.700017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:80672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.002 [2024-07-15 13:28:02.700032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.002 [2024-07-15 13:28:02.700050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:80680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.002 [2024-07-15 13:28:02.700066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.002 [2024-07-15 13:28:02.700083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:80688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.002 [2024-07-15 13:28:02.700099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.002 [2024-07-15 13:28:02.700116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:80696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.002 [2024-07-15 13:28:02.700135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.002 [2024-07-15 13:28:02.700154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:80704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.002 [2024-07-15 13:28:02.700169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.002 [2024-07-15 13:28:02.700188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:80712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.002 [2024-07-15 13:28:02.700219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.002 [2024-07-15 13:28:02.700241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:80720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.002 [2024-07-15 13:28:02.700258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.002 [2024-07-15 13:28:02.700277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:80728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.002 [2024-07-15 13:28:02.700293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.002 [2024-07-15 13:28:02.700310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.002 [2024-07-15 13:28:02.700326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.002 [2024-07-15 13:28:02.700343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:99 nsid:1 lba:80744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.002 [2024-07-15 13:28:02.700359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.002 [2024-07-15 13:28:02.700377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:80752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.002 [2024-07-15 13:28:02.700393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.002 [2024-07-15 13:28:02.700430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:80760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.002 [2024-07-15 13:28:02.700445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.002 [2024-07-15 13:28:02.700463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:80768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.002 [2024-07-15 13:28:02.700478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.002 [2024-07-15 13:28:02.700496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:80776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.002 [2024-07-15 13:28:02.700511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.002 [2024-07-15 13:28:02.700527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:80784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.002 [2024-07-15 13:28:02.700543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.002 [2024-07-15 13:28:02.700559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:80792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.002 [2024-07-15 13:28:02.700575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.002 [2024-07-15 13:28:02.700592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:80800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.002 [2024-07-15 13:28:02.700608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.002 [2024-07-15 13:28:02.700627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:80808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.002 [2024-07-15 13:28:02.700642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.002 [2024-07-15 13:28:02.700659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:80816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.002 [2024-07-15 13:28:02.700675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.002 [2024-07-15 13:28:02.700691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:80824 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.002 [2024-07-15 13:28:02.700708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.002 [2024-07-15 13:28:02.700726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:80832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.002 [2024-07-15 13:28:02.700741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.002 [2024-07-15 13:28:02.700758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:80840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.002 [2024-07-15 13:28:02.700773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.002 [2024-07-15 13:28:02.700791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:80848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.002 [2024-07-15 13:28:02.700806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.002 [2024-07-15 13:28:02.700825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:80856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.002 [2024-07-15 13:28:02.700840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.002 [2024-07-15 13:28:02.700858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:80864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.002 [2024-07-15 13:28:02.700874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.002 [2024-07-15 13:28:02.700893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:80872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.002 [2024-07-15 13:28:02.700909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.002 [2024-07-15 13:28:02.700927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:80880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.002 [2024-07-15 13:28:02.700942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.002 [2024-07-15 13:28:02.700960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:80888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.002 [2024-07-15 13:28:02.700975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.002 [2024-07-15 13:28:02.700992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:80896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.002 [2024-07-15 13:28:02.701008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.002 [2024-07-15 13:28:02.701027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:80904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:29:06.002 [2024-07-15 13:28:02.701042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.002 [2024-07-15 13:28:02.701061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:80912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.002 [2024-07-15 13:28:02.701076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.002 [2024-07-15 13:28:02.701094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:80920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.002 [2024-07-15 13:28:02.701109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.002 [2024-07-15 13:28:02.701127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:80928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.002 [2024-07-15 13:28:02.701143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.002 [2024-07-15 13:28:02.701162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:80936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.002 [2024-07-15 13:28:02.701177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.002 [2024-07-15 13:28:02.701195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:80944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.002 [2024-07-15 13:28:02.701229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.002 [2024-07-15 13:28:02.701248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:80952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.002 [2024-07-15 13:28:02.701264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.002 [2024-07-15 13:28:02.701283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:80960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.002 [2024-07-15 13:28:02.701298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.002 [2024-07-15 13:28:02.701316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:80968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.002 [2024-07-15 13:28:02.701333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.003 [2024-07-15 13:28:02.701350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:80976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.003 [2024-07-15 13:28:02.701366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.003 [2024-07-15 13:28:02.701386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:80984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.003 [2024-07-15 
13:28:02.701401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.003 [2024-07-15 13:28:02.701419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:80992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.003 [2024-07-15 13:28:02.701436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.003 [2024-07-15 13:28:02.701453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:81000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.003 [2024-07-15 13:28:02.701469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.003 [2024-07-15 13:28:02.701487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:81008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.003 [2024-07-15 13:28:02.701502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.003 [2024-07-15 13:28:02.701520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:81016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.003 [2024-07-15 13:28:02.701536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.003 [2024-07-15 13:28:02.701554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:81024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.003 [2024-07-15 13:28:02.701570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.003 [2024-07-15 13:28:02.701589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:81032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.003 [2024-07-15 13:28:02.701605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.003 [2024-07-15 13:28:02.701623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:81040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.003 [2024-07-15 13:28:02.701641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.003 [2024-07-15 13:28:02.701658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:81048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.003 [2024-07-15 13:28:02.701673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.003 [2024-07-15 13:28:02.701691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:81056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.003 [2024-07-15 13:28:02.701707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.003 [2024-07-15 13:28:02.701726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:81064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.003 [2024-07-15 13:28:02.701742] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.003 [2024-07-15 13:28:02.701760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:81072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.003 [2024-07-15 13:28:02.701775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.003 [2024-07-15 13:28:02.701793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:81080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.003 [2024-07-15 13:28:02.701808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.003 [2024-07-15 13:28:02.701826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:81088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.003 [2024-07-15 13:28:02.701841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.003 [2024-07-15 13:28:02.701859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:81096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.003 [2024-07-15 13:28:02.701875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.003 [2024-07-15 13:28:02.701893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:81104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.003 [2024-07-15 13:28:02.701909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.003 [2024-07-15 13:28:02.701927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:81112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.003 [2024-07-15 13:28:02.701942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.003 [2024-07-15 13:28:02.701960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:81120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.003 [2024-07-15 13:28:02.701976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.003 [2024-07-15 13:28:02.701995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:81128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.003 [2024-07-15 13:28:02.702010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.003 [2024-07-15 13:28:02.702027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:81136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.003 [2024-07-15 13:28:02.702043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.003 [2024-07-15 13:28:02.702061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:81144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.003 [2024-07-15 13:28:02.702077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.003 [2024-07-15 13:28:02.702106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:81152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.003 [2024-07-15 13:28:02.702122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.003 [2024-07-15 13:28:02.702140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:81160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.003 [2024-07-15 13:28:02.702156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.003 [2024-07-15 13:28:02.702174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:81168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.003 [2024-07-15 13:28:02.702189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.003 [2024-07-15 13:28:02.702220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:81176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.003 [2024-07-15 13:28:02.702239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.003 [2024-07-15 13:28:02.702269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:81184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.003 [2024-07-15 13:28:02.702286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.003 [2024-07-15 13:28:02.702306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.003 [2024-07-15 13:28:02.702321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.003 [2024-07-15 13:28:02.702339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:81272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.003 [2024-07-15 13:28:02.702354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.003 [2024-07-15 13:28:02.702371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:81280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.003 [2024-07-15 13:28:02.702387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.003 [2024-07-15 13:28:02.702405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.003 [2024-07-15 13:28:02.702421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.003 [2024-07-15 13:28:02.702438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:81296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.003 [2024-07-15 13:28:02.702454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:29:06.003 [2024-07-15 13:28:02.702471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:81304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.003 [2024-07-15 13:28:02.702487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.003 [2024-07-15 13:28:02.702505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:81312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.003 [2024-07-15 13:28:02.702521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.003 [2024-07-15 13:28:02.702539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:81320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.003 [2024-07-15 13:28:02.702555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.003 [2024-07-15 13:28:02.702573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:81328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.003 [2024-07-15 13:28:02.702588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.003 [2024-07-15 13:28:02.702605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:81336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.003 [2024-07-15 13:28:02.702621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.003 [2024-07-15 13:28:02.702639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:81344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.003 [2024-07-15 13:28:02.702655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.003 [2024-07-15 13:28:02.702674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:81352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.003 [2024-07-15 13:28:02.702689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.003 [2024-07-15 13:28:02.702708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:81360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.003 [2024-07-15 13:28:02.702724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.003 [2024-07-15 13:28:02.702742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:81368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.003 [2024-07-15 13:28:02.702758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.003 [2024-07-15 13:28:02.702791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:81376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.003 [2024-07-15 13:28:02.702818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.004 [2024-07-15 
13:28:02.702838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:81384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.004 [2024-07-15 13:28:02.702854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.004 [2024-07-15 13:28:02.702871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:81392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.004 [2024-07-15 13:28:02.702888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.004 [2024-07-15 13:28:02.702906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:81400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.004 [2024-07-15 13:28:02.702922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.004 [2024-07-15 13:28:02.702940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:81408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.004 [2024-07-15 13:28:02.702955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.004 [2024-07-15 13:28:02.702974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:81416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.004 [2024-07-15 13:28:02.702989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.004 [2024-07-15 13:28:02.703008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:81424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.004 [2024-07-15 13:28:02.703024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.004 [2024-07-15 13:28:02.703042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:81432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.004 [2024-07-15 13:28:02.703057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.004 [2024-07-15 13:28:02.703075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:81440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.004 [2024-07-15 13:28:02.703092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.004 [2024-07-15 13:28:02.703110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:81448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.004 [2024-07-15 13:28:02.703125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.004 [2024-07-15 13:28:02.703143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:81456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.004 [2024-07-15 13:28:02.703158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.004 [2024-07-15 13:28:02.703176] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:81200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.004 [2024-07-15 13:28:02.703191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.004 [2024-07-15 13:28:02.703222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:81208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.004 [2024-07-15 13:28:02.703240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.004 [2024-07-15 13:28:02.703259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:81216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.004 [2024-07-15 13:28:02.703275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.004 [2024-07-15 13:28:02.703293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:81224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.004 [2024-07-15 13:28:02.703319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.004 [2024-07-15 13:28:02.703337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:81232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.004 [2024-07-15 13:28:02.703353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.004 [2024-07-15 13:28:02.703371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:81240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.004 [2024-07-15 13:28:02.703387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.004 [2024-07-15 13:28:02.703405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:81248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.004 [2024-07-15 13:28:02.703420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.004 [2024-07-15 13:28:02.703438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:81256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.004 [2024-07-15 13:28:02.703453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.004 [2024-07-15 13:28:02.703471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:81264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.004 [2024-07-15 13:28:02.703488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.004 [2024-07-15 13:28:02.703506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:81464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.004 [2024-07-15 13:28:02.703523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.004 [2024-07-15 13:28:02.703543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:27 nsid:1 lba:81472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.004 [2024-07-15 13:28:02.703559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.004 [2024-07-15 13:28:02.703578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:81480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.004 [2024-07-15 13:28:02.703593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.004 [2024-07-15 13:28:02.703611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:81488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.004 [2024-07-15 13:28:02.703626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.004 [2024-07-15 13:28:02.703643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:81496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.004 [2024-07-15 13:28:02.703660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.004 [2024-07-15 13:28:02.703679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:81504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.004 [2024-07-15 13:28:02.703694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.004 [2024-07-15 13:28:02.703713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:81512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.004 [2024-07-15 13:28:02.703728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.004 [2024-07-15 13:28:02.703746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:81520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.004 [2024-07-15 13:28:02.703762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.004 [2024-07-15 13:28:02.703780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:81528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.004 [2024-07-15 13:28:02.703795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.004 [2024-07-15 13:28:02.703812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:81536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.004 [2024-07-15 13:28:02.703828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.004 [2024-07-15 13:28:02.703846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:81544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.004 [2024-07-15 13:28:02.703861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.004 [2024-07-15 13:28:02.703879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:81552 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:29:06.004 [2024-07-15 13:28:02.703894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.004 [2024-07-15 13:28:02.703913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:81560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.004 [2024-07-15 13:28:02.703933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.004 [2024-07-15 13:28:02.703951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:81568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.004 [2024-07-15 13:28:02.703966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.004 [2024-07-15 13:28:02.703983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:81576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.004 [2024-07-15 13:28:02.703999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.004 [2024-07-15 13:28:02.704038] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:06.004 [2024-07-15 13:28:02.704069] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:06.004 [2024-07-15 13:28:02.704084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81584 len:8 PRP1 0x0 PRP2 0x0 00:29:06.004 [2024-07-15 13:28:02.704100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.004 [2024-07-15 13:28:02.704170] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x5f4d70 was disconnected and freed. reset controller. 
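The long run of paired nvme_io_qpair_print_command / spdk_nvme_print_completion entries above is the host draining its I/O queue pair: each in-flight WRITE/READ is completed with ABORTED - SQ DELETION (00/08) before bdev_nvme frees the qpair and schedules a controller reset. A quick way to summarize a block like this from a saved copy of the console output (a sketch only; console.log is a placeholder file name, not something produced by this job):

  # count aborted I/O commands per opcode (WRITE vs READ) in the saved console output
  grep -o 'nvme_io_qpair_print_command: \*NOTICE\*: [A-Z]*' console.log | sort | uniq -c
  # count the matching ABORTED - SQ DELETION completions
  grep -c 'ABORTED - SQ DELETION' console.log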
00:29:06.004 [2024-07-15 13:28:02.704316] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:29:06.004 [2024-07-15 13:28:02.704354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:06.004 [2024-07-15 13:28:02.704378] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:29:06.004 [2024-07-15 13:28:02.704394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:06.004 [2024-07-15 13:28:02.704409] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:29:06.004 [2024-07-15 13:28:02.704424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:06.004 [2024-07-15 13:28:02.704439] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:29:06.004 [2024-07-15 13:28:02.704455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:06.004 [2024-07-15 13:28:02.704471] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1aa0 is same with the state(5) to be set
00:29:06.004 [2024-07-15 13:28:02.704763] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.004 [2024-07-15 13:28:02.704813] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1aa0 (9): Bad file descriptor
00:29:06.004 [2024-07-15 13:28:02.704968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.004 [2024-07-15 13:28:02.705011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1aa0 with addr=10.0.0.2, port=4420
00:29:06.005 [2024-07-15 13:28:02.705030] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1aa0 is same with the state(5) to be set
00:29:06.005 [2024-07-15 13:28:02.705058] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1aa0 (9): Bad file descriptor
00:29:06.005 [2024-07-15 13:28:02.705084] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.005 [2024-07-15 13:28:02.705100] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.005 [2024-07-15 13:28:02.705117] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.005 [2024-07-15 13:28:02.705147] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.005 [2024-07-15 13:28:02.705176] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.005 13:28:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1
00:29:07.378 [2024-07-15 13:28:03.705348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.378 [2024-07-15 13:28:03.705413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1aa0 with addr=10.0.0.2, port=4420
00:29:07.378 [2024-07-15 13:28:03.705440] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1aa0 is same with the state(5) to be set
00:29:07.378 [2024-07-15 13:28:03.705476] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1aa0 (9): Bad file descriptor
00:29:07.378 [2024-07-15 13:28:03.705504] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.378 [2024-07-15 13:28:03.705519] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.378 [2024-07-15 13:28:03.705536] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.378 [2024-07-15 13:28:03.705575] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.378 [2024-07-15 13:28:03.705594] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.378 13:28:03 nvmf_tcp.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:29:07.378 [2024-07-15 13:28:03.967076] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:07.378 13:28:03 nvmf_tcp.nvmf_timeout -- host/timeout.sh@92 -- # wait 114879
00:29:08.312 [2024-07-15 13:28:04.718916] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:29:14.868
00:29:14.868 Latency(us)
00:29:14.868 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:14.868 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:29:14.868 Verification LBA range: start 0x0 length 0x4000
00:29:14.868 NVMe0n1 : 10.01 6307.84 24.64 0.00 0.00 20247.66 2085.24 3019898.88
00:29:14.868 ===================================================================================================================
00:29:14.868 Total : 6307.84 24.64 0.00 0.00 20247.66 2085.24 3019898.88
00:29:14.868 0
00:29:14.868 13:28:11 nvmf_tcp.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=114991
00:29:14.868 13:28:11 nvmf_tcp.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:29:14.868 13:28:11 nvmf_tcp.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
00:29:15.126 Running I/O for 10 seconds...
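The trace above is the recovery half of this timeout test: host/timeout.sh re-adds the TCP listener over JSON-RPC, the pending controller reset then completes ("Resetting controller successful."), the bdevperf verify job's summary is printed, and a second 10-second run is kicked off. The two listener operations can be issued against a running target with the same rpc.py calls seen in the trace (a sketch only; the SPDK and NQN shell variables are illustrative helpers, and the NQN, address, port, and rpc.py path are simply copied from the trace lines above):

  SPDK=/home/vagrant/spdk_repo/spdk   # checkout location shown in the trace (adjust for your setup)
  NQN=nqn.2016-06.io.spdk:cnode1      # subsystem NQN used by this test

  # Removing the listener tears down the host's qpairs, which in this test shows up
  # as the ABORTED - SQ DELETION runs and the failing reconnect attempts in this log
  $SPDK/scripts/rpc.py nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4420

  # Re-adding it lets the host-side reconnect/reset loop finish successfully
  $SPDK/scripts/rpc.py nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420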
00:29:16.060 13:28:12 nvmf_tcp.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:16.321 [2024-07-15 13:28:12.845551] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf4c60 is same with the state(5) to be set 00:29:16.321 [2024-07-15 13:28:12.845634] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf4c60 is same with the state(5) to be set 00:29:16.321 [2024-07-15 13:28:12.845646] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf4c60 is same with the state(5) to be set 00:29:16.321 [2024-07-15 13:28:12.845654] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf4c60 is same with the state(5) to be set 00:29:16.321 [2024-07-15 13:28:12.845662] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf4c60 is same with the state(5) to be set 00:29:16.321 [2024-07-15 13:28:12.845669] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf4c60 is same with the state(5) to be set 00:29:16.321 [2024-07-15 13:28:12.845678] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf4c60 is same with the state(5) to be set 00:29:16.321 [2024-07-15 13:28:12.846149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:79424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.321 [2024-07-15 13:28:12.846187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.321 [2024-07-15 13:28:12.846216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:79432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.321 [2024-07-15 13:28:12.846234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.321 [2024-07-15 13:28:12.846253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:79440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.321 [2024-07-15 13:28:12.846286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.321 [2024-07-15 13:28:12.846310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:78936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.321 [2024-07-15 13:28:12.846333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.321 [2024-07-15 13:28:12.846351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:78944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.321 [2024-07-15 13:28:12.846368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.321 [2024-07-15 13:28:12.846385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:78952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.321 [2024-07-15 13:28:12.846401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.321 [2024-07-15 13:28:12.846420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:0 nsid:1 lba:78960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.321 [2024-07-15 13:28:12.846435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.321 [2024-07-15 13:28:12.846453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:78968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.321 [2024-07-15 13:28:12.846468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.321 [2024-07-15 13:28:12.846487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:78976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.321 [2024-07-15 13:28:12.846502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.321 [2024-07-15 13:28:12.846519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:78984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.321 [2024-07-15 13:28:12.846537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.321 [2024-07-15 13:28:12.846570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.321 [2024-07-15 13:28:12.846599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.321 [2024-07-15 13:28:12.846616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:79000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.321 [2024-07-15 13:28:12.846632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.321 [2024-07-15 13:28:12.846648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:79008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.321 [2024-07-15 13:28:12.846662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.321 [2024-07-15 13:28:12.846678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:79016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.321 [2024-07-15 13:28:12.846692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.321 [2024-07-15 13:28:12.846709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:79024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.321 [2024-07-15 13:28:12.846723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.321 [2024-07-15 13:28:12.846740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:79032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.321 [2024-07-15 13:28:12.846754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.321 [2024-07-15 13:28:12.846802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:79040 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:29:16.321 [2024-07-15 13:28:12.846826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.321 [2024-07-15 13:28:12.846847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:79048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.321 [2024-07-15 13:28:12.846864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.321 [2024-07-15 13:28:12.846882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:79056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.321 [2024-07-15 13:28:12.846898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.321 [2024-07-15 13:28:12.846916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:79064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.321 [2024-07-15 13:28:12.846931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.321 [2024-07-15 13:28:12.846948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:79072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.321 [2024-07-15 13:28:12.846964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.321 [2024-07-15 13:28:12.846982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:79080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.321 [2024-07-15 13:28:12.846997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.321 [2024-07-15 13:28:12.847016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:79088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.322 [2024-07-15 13:28:12.847031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.322 [2024-07-15 13:28:12.847048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:79096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.322 [2024-07-15 13:28:12.847064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.322 [2024-07-15 13:28:12.847081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:79104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.322 [2024-07-15 13:28:12.847096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.322 [2024-07-15 13:28:12.847114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:79112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.322 [2024-07-15 13:28:12.847129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.322 [2024-07-15 13:28:12.847147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:79120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.322 
[2024-07-15 13:28:12.847162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.322 [2024-07-15 13:28:12.847180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:79128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.322 [2024-07-15 13:28:12.847195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.322 [2024-07-15 13:28:12.847212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:79136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.322 [2024-07-15 13:28:12.847245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.322 [2024-07-15 13:28:12.847265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:79144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.322 [2024-07-15 13:28:12.847283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.322 [2024-07-15 13:28:12.847300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:79152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.322 [2024-07-15 13:28:12.847316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.322 [2024-07-15 13:28:12.847339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:79160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.322 [2024-07-15 13:28:12.847354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.322 [2024-07-15 13:28:12.847372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:79168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.322 [2024-07-15 13:28:12.847398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.322 [2024-07-15 13:28:12.847415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:79176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.322 [2024-07-15 13:28:12.847431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.322 [2024-07-15 13:28:12.847449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:79184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.322 [2024-07-15 13:28:12.847464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.322 [2024-07-15 13:28:12.847481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:79192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.322 [2024-07-15 13:28:12.847497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.322 [2024-07-15 13:28:12.847515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:79200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.322 [2024-07-15 13:28:12.847531] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.322 [2024-07-15 13:28:12.847550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:79208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.322 [2024-07-15 13:28:12.847566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.322 [2024-07-15 13:28:12.847584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:79216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.322 [2024-07-15 13:28:12.847601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.322 [2024-07-15 13:28:12.847619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:79224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.322 [2024-07-15 13:28:12.847634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.322 [2024-07-15 13:28:12.847662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:79232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.322 [2024-07-15 13:28:12.847688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.322 [2024-07-15 13:28:12.847706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:79240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.322 [2024-07-15 13:28:12.847722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.322 [2024-07-15 13:28:12.847739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:79248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.322 [2024-07-15 13:28:12.847754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.322 [2024-07-15 13:28:12.847773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:79256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.322 [2024-07-15 13:28:12.847789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.322 [2024-07-15 13:28:12.847807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:79264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.322 [2024-07-15 13:28:12.847823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.322 [2024-07-15 13:28:12.847841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:79272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.322 [2024-07-15 13:28:12.847856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.322 [2024-07-15 13:28:12.847873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:79280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.322 [2024-07-15 13:28:12.847890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.322 [2024-07-15 13:28:12.847909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:79288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.322 [2024-07-15 13:28:12.847925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.322 [2024-07-15 13:28:12.847944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:79296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.322 [2024-07-15 13:28:12.847960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.322 [2024-07-15 13:28:12.847978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:79304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.322 [2024-07-15 13:28:12.847993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.322 [2024-07-15 13:28:12.848011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:79312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.322 [2024-07-15 13:28:12.848027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.322 [2024-07-15 13:28:12.848044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:79320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.322 [2024-07-15 13:28:12.848059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.322 [2024-07-15 13:28:12.848077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:79328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.322 [2024-07-15 13:28:12.848092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.322 [2024-07-15 13:28:12.848110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:79336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.322 [2024-07-15 13:28:12.848126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.322 [2024-07-15 13:28:12.848144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:79344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.322 [2024-07-15 13:28:12.848160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.322 [2024-07-15 13:28:12.848178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:79352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.322 [2024-07-15 13:28:12.848194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.322 [2024-07-15 13:28:12.848229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:79448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.322 [2024-07-15 13:28:12.848245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.322 [2024-07-15 13:28:12.848263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:79456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.322 [2024-07-15 13:28:12.848279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.322 [2024-07-15 13:28:12.848297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:79464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.322 [2024-07-15 13:28:12.848311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.322 [2024-07-15 13:28:12.848329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:79472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.322 [2024-07-15 13:28:12.848344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.322 [2024-07-15 13:28:12.848362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:79480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.322 [2024-07-15 13:28:12.848385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.322 [2024-07-15 13:28:12.848403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:79488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.322 [2024-07-15 13:28:12.848418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.322 [2024-07-15 13:28:12.848436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:79496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.322 [2024-07-15 13:28:12.848451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.322 [2024-07-15 13:28:12.848470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:79504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.322 [2024-07-15 13:28:12.848486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.323 [2024-07-15 13:28:12.848503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:79512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.323 [2024-07-15 13:28:12.848518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.323 [2024-07-15 13:28:12.848535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:79520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.323 [2024-07-15 13:28:12.848551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.323 [2024-07-15 13:28:12.848569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:79528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.323 [2024-07-15 13:28:12.848584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.323 
[2024-07-15 13:28:12.848602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:79536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.323 [2024-07-15 13:28:12.848617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.323 [2024-07-15 13:28:12.848635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:79544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.323 [2024-07-15 13:28:12.848651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.323 [2024-07-15 13:28:12.848668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:79552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.323 [2024-07-15 13:28:12.848683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.323 [2024-07-15 13:28:12.848701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:79560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.323 [2024-07-15 13:28:12.848716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.323 [2024-07-15 13:28:12.848733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:79568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.323 [2024-07-15 13:28:12.848749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.323 [2024-07-15 13:28:12.848766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:79576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.323 [2024-07-15 13:28:12.848782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.323 [2024-07-15 13:28:12.848800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:79584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.323 [2024-07-15 13:28:12.848816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.323 [2024-07-15 13:28:12.848834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:79592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.323 [2024-07-15 13:28:12.848849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.323 [2024-07-15 13:28:12.848867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:79600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.323 [2024-07-15 13:28:12.848882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.323 [2024-07-15 13:28:12.848900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:79608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.323 [2024-07-15 13:28:12.848915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.323 [2024-07-15 13:28:12.848933] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:79616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.323 [2024-07-15 13:28:12.848948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.323 [2024-07-15 13:28:12.848965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:79624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.323 [2024-07-15 13:28:12.848982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.323 [2024-07-15 13:28:12.849000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:79632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.323 [2024-07-15 13:28:12.849015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.323 [2024-07-15 13:28:12.849032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:79640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.323 [2024-07-15 13:28:12.849048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.323 [2024-07-15 13:28:12.849065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:79648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.323 [2024-07-15 13:28:12.849081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.323 [2024-07-15 13:28:12.849098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:79656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.323 [2024-07-15 13:28:12.849114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.323 [2024-07-15 13:28:12.849133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:79664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.323 [2024-07-15 13:28:12.849148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.323 [2024-07-15 13:28:12.849168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:79672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.323 [2024-07-15 13:28:12.849183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.323 [2024-07-15 13:28:12.849200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:79680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.323 [2024-07-15 13:28:12.849236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.323 [2024-07-15 13:28:12.849254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:79688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.323 [2024-07-15 13:28:12.849270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.323 [2024-07-15 13:28:12.849289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:21 nsid:1 lba:79696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.323 [2024-07-15 13:28:12.849305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.323 [2024-07-15 13:28:12.849323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:79704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.323 [2024-07-15 13:28:12.849339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.323 [2024-07-15 13:28:12.849356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:79712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.323 [2024-07-15 13:28:12.849372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.323 [2024-07-15 13:28:12.849390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:79720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.323 [2024-07-15 13:28:12.849407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.323 [2024-07-15 13:28:12.849425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:79728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.323 [2024-07-15 13:28:12.849442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.323 [2024-07-15 13:28:12.849459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:79736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.323 [2024-07-15 13:28:12.849474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.323 [2024-07-15 13:28:12.849491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:79744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.323 [2024-07-15 13:28:12.849506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.323 [2024-07-15 13:28:12.849524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:79752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.323 [2024-07-15 13:28:12.849539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.323 [2024-07-15 13:28:12.849557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:79760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.323 [2024-07-15 13:28:12.849572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.323 [2024-07-15 13:28:12.849590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:79768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.323 [2024-07-15 13:28:12.849605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.323 [2024-07-15 13:28:12.849622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:79776 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:29:16.323 [2024-07-15 13:28:12.849638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.323 [2024-07-15 13:28:12.849656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:79784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.323 [2024-07-15 13:28:12.849671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.323 [2024-07-15 13:28:12.849695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:79792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.323 [2024-07-15 13:28:12.849711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.323 [2024-07-15 13:28:12.849728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:79800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.323 [2024-07-15 13:28:12.849743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.323 [2024-07-15 13:28:12.849766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:79808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.323 [2024-07-15 13:28:12.849783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.323 [2024-07-15 13:28:12.849800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:79816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.323 [2024-07-15 13:28:12.849816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.323 [2024-07-15 13:28:12.849833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:79824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.323 [2024-07-15 13:28:12.849847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.323 [2024-07-15 13:28:12.849865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:79832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.323 [2024-07-15 13:28:12.849880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.323 [2024-07-15 13:28:12.849899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:79840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.323 [2024-07-15 13:28:12.849914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.323 [2024-07-15 13:28:12.849933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:79848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.323 [2024-07-15 13:28:12.849948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.324 [2024-07-15 13:28:12.849966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:79856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.324 [2024-07-15 
13:28:12.849982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.324 [2024-07-15 13:28:12.850010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:79864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.324 [2024-07-15 13:28:12.850026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.324 [2024-07-15 13:28:12.850048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:79872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.324 [2024-07-15 13:28:12.850064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.324 [2024-07-15 13:28:12.850081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:79880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.324 [2024-07-15 13:28:12.850096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.324 [2024-07-15 13:28:12.850114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:79888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.324 [2024-07-15 13:28:12.850130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.324 [2024-07-15 13:28:12.850147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:79360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.324 [2024-07-15 13:28:12.850163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.324 [2024-07-15 13:28:12.850181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:79368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.324 [2024-07-15 13:28:12.850195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.324 [2024-07-15 13:28:12.850229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:79376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.324 [2024-07-15 13:28:12.850246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.324 [2024-07-15 13:28:12.850264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:79384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.324 [2024-07-15 13:28:12.850279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.324 [2024-07-15 13:28:12.850297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:79392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.324 [2024-07-15 13:28:12.850313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.324 [2024-07-15 13:28:12.850330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:79400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.324 [2024-07-15 13:28:12.850346] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.324 [2024-07-15 13:28:12.850364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:79408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.324 [2024-07-15 13:28:12.850379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.324 [2024-07-15 13:28:12.850397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:79416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.324 [2024-07-15 13:28:12.850412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.324 [2024-07-15 13:28:12.850430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:79896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.324 [2024-07-15 13:28:12.850445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.324 [2024-07-15 13:28:12.850462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:79904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.324 [2024-07-15 13:28:12.850478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.324 [2024-07-15 13:28:12.850496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:79912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.324 [2024-07-15 13:28:12.850511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.324 [2024-07-15 13:28:12.850529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:79920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.324 [2024-07-15 13:28:12.850544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.324 [2024-07-15 13:28:12.850567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:79928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.324 [2024-07-15 13:28:12.850584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.324 [2024-07-15 13:28:12.850601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:79936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.324 [2024-07-15 13:28:12.850616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.324 [2024-07-15 13:28:12.850633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:79944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.324 [2024-07-15 13:28:12.850649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.324 [2024-07-15 13:28:12.850687] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:16.324 [2024-07-15 13:28:12.850704] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:16.324 [2024-07-15 
13:28:12.850718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79952 len:8 PRP1 0x0 PRP2 0x0 00:29:16.324 [2024-07-15 13:28:12.850733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.324 [2024-07-15 13:28:12.850823] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x625280 was disconnected and freed. reset controller. 00:29:16.324 [2024-07-15 13:28:12.851136] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.324 [2024-07-15 13:28:12.851271] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1aa0 (9): Bad file descriptor 00:29:16.324 [2024-07-15 13:28:12.851423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.324 [2024-07-15 13:28:12.851453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1aa0 with addr=10.0.0.2, port=4420 00:29:16.324 [2024-07-15 13:28:12.851471] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1aa0 is same with the state(5) to be set 00:29:16.324 [2024-07-15 13:28:12.851499] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1aa0 (9): Bad file descriptor 00:29:16.324 [2024-07-15 13:28:12.851525] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.324 [2024-07-15 13:28:12.851541] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.324 [2024-07-15 13:28:12.851557] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.324 [2024-07-15 13:28:12.851587] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:16.324 [2024-07-15 13:28:12.851605] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.324 13:28:12 nvmf_tcp.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:29:17.258 [2024-07-15 13:28:13.851774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.258 [2024-07-15 13:28:13.851896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1aa0 with addr=10.0.0.2, port=4420 00:29:17.258 [2024-07-15 13:28:13.851919] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1aa0 is same with the state(5) to be set 00:29:17.258 [2024-07-15 13:28:13.852002] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1aa0 (9): Bad file descriptor 00:29:17.258 [2024-07-15 13:28:13.852029] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.258 [2024-07-15 13:28:13.852043] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.258 [2024-07-15 13:28:13.852059] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.258 [2024-07-15 13:28:13.852098] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
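The block above is the host-side view of the induced outage: the target's TCP listener is down at this point (it gets re-added by host/timeout.sh@102 in the entries that follow), so each reconnect attempt fails in posix_sock_create() with errno 111 and bdev_nvme schedules another controller reset. The attempts land roughly one second apart (13:28:12.85, 13:28:13.85, and the further retries below) while host/timeout.sh@101 sleeps for 3 seconds. errno 111 is the ordinary Linux "Connection refused" code; a one-line check, assuming python3 is available on the test VM:

  # errno 111 on Linux is ECONNREFUSED ("Connection refused")
  python3 -c 'import errno, os; print(errno.errorcode[111], os.strerror(111))'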
00:29:17.258 [2024-07-15 13:28:13.852116] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:18.192 [2024-07-15 13:28:14.852270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.192 [2024-07-15 13:28:14.852354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1aa0 with addr=10.0.0.2, port=4420 00:29:18.192 [2024-07-15 13:28:14.852383] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1aa0 is same with the state(5) to be set 00:29:18.192 [2024-07-15 13:28:14.852418] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1aa0 (9): Bad file descriptor 00:29:18.192 [2024-07-15 13:28:14.852445] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:18.192 [2024-07-15 13:28:14.852459] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:18.192 [2024-07-15 13:28:14.852475] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:18.192 [2024-07-15 13:28:14.852516] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:18.192 [2024-07-15 13:28:14.852534] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.125 [2024-07-15 13:28:15.856165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.125 [2024-07-15 13:28:15.856258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e1aa0 with addr=10.0.0.2, port=4420 00:29:19.125 [2024-07-15 13:28:15.856290] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e1aa0 is same with the state(5) to be set 00:29:19.125 [2024-07-15 13:28:15.856586] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e1aa0 (9): Bad file descriptor 00:29:19.125 [2024-07-15 13:28:15.856888] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.125 [2024-07-15 13:28:15.856928] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.125 [2024-07-15 13:28:15.856948] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.125 [2024-07-15 13:28:15.860908] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:19.125 [2024-07-15 13:28:15.860950] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.383 13:28:15 nvmf_tcp.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:19.641 [2024-07-15 13:28:16.168721] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:19.641 13:28:16 nvmf_tcp.nvmf_timeout -- host/timeout.sh@103 -- # wait 114991 00:29:20.208 [2024-07-15 13:28:16.899904] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
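The recovery is driven from the target side: host/timeout.sh@102 re-adds the TCP listener (the nvmf_tcp_listen notice at 13:28:16.168), and the next scheduled reset attempt connects, which is what produces the "Resetting controller successful" entry at 13:28:16.899. A minimal sketch of that step, with the command copied from the log above (the -s 4420 is the listener's TCP port argument to the method; there is no socket path before the method name, so the call goes to the target application's default RPC socket rather than to bdevperf):

  # Re-advertise the subsystem on 10.0.0.2:4420 so the host's next
  # reconnect attempt can succeed (host/timeout.sh@102 above):
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener \
      nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The run summary that follows shows both halves of the exercise: the verify job still reports completed I/O after the reconnect, while the non-zero Fail/s column covers the I/O that failed while the path was down.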
00:29:25.470 00:29:25.470 Latency(us) 00:29:25.470 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:25.470 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:29:25.470 Verification LBA range: start 0x0 length 0x4000 00:29:25.470 NVMe0n1 : 10.01 5235.99 20.45 3593.10 0.00 14467.29 644.19 3019898.88 00:29:25.470 =================================================================================================================== 00:29:25.470 Total : 5235.99 20.45 3593.10 0.00 14467.29 0.00 3019898.88 00:29:25.470 0 00:29:25.470 13:28:21 nvmf_tcp.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 114836 00:29:25.470 13:28:21 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@946 -- # '[' -z 114836 ']' 00:29:25.470 13:28:21 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@950 -- # kill -0 114836 00:29:25.470 13:28:21 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # uname 00:29:25.470 13:28:21 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:25.470 13:28:21 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 114836 00:29:25.470 killing process with pid 114836 00:29:25.470 Received shutdown signal, test time was about 10.000000 seconds 00:29:25.470 00:29:25.470 Latency(us) 00:29:25.470 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:25.470 =================================================================================================================== 00:29:25.470 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:25.470 13:28:21 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:29:25.470 13:28:21 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:29:25.470 13:28:21 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@964 -- # echo 'killing process with pid 114836' 00:29:25.470 13:28:21 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@965 -- # kill 114836 00:29:25.470 13:28:21 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@970 -- # wait 114836 00:29:25.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:25.470 13:28:21 nvmf_tcp.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=115112 00:29:25.470 13:28:21 nvmf_tcp.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 115112 /var/tmp/bdevperf.sock 00:29:25.470 13:28:21 nvmf_tcp.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:29:25.470 13:28:21 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@827 -- # '[' -z 115112 ']' 00:29:25.470 13:28:21 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:25.470 13:28:21 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:25.470 13:28:21 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:25.470 13:28:21 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:25.470 13:28:21 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:25.470 [2024-07-15 13:28:22.030174] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:29:25.470 [2024-07-15 13:28:22.030273] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115112 ] 00:29:25.470 [2024-07-15 13:28:22.160751] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:25.777 [2024-07-15 13:28:22.243232] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:26.358 13:28:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:26.358 13:28:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@860 -- # return 0 00:29:26.358 13:28:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=115140 00:29:26.358 13:28:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 115112 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:29:26.358 13:28:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:29:26.617 13:28:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:29:27.191 NVMe0n1 00:29:27.191 13:28:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=115194 00:29:27.191 13:28:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:29:27.191 13:28:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:27.191 Running I/O for 10 seconds... 
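The entries above set up the second bdevperf pass end to end: bdevperf is started idle (-z) on its own RPC socket, a bpftrace probe is attached to it, the NVMe-oF controller is attached with an explicit controller-loss timeout and reconnect delay, and the 10-second randread job is kicked off over RPC. A rough shell equivalent of those steps is sketched below; paths and arguments are copied from the log, while the backgrounding and PID handling are only illustrative (the real host/timeout.sh uses its waitforlisten/killprocess helpers):

  # Start bdevperf idle; -z keeps it waiting until a perform_tests RPC arrives.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f &
  bdevperf_pid=$!        # 115112 in this run
  # (waitforlisten blocks here until /var/tmp/bdevperf.sock accepts RPCs)

  # Attach the bpf probes used by the timeout test to the bdevperf process.
  /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh "$bdevperf_pid" \
      /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt &   # dtrace_pid=115140

  # Configure bdev_nvme, attach the controller, then start the job over RPC.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_set_options -r -1 -e 9
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bdevperf.sock perform_tests &   # rpc_pid=115194
  sleep 1                # host/timeout.sh@125, before the listener is removed below

With --reconnect-delay-sec 2 and --ctrlr-loss-timeout-sec 5, the NVMe0 controller created here retries a broken connection every 2 seconds and gives up on the controller about 5 seconds after the path disappears; the nvmf_subsystem_remove_listener call just below then breaks the path while the job is running, which is the condition those settings are there to exercise.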
00:29:28.124 13:28:24 nvmf_tcp.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:28.385 [2024-07-15 13:28:24.966200] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf7700 is same with the state(5) to be set 00:29:28.385 [2024-07-15 13:28:24.966294] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf7700 is same with the state(5) to be set 00:29:28.385 [2024-07-15 13:28:24.966306] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf7700 is same with the state(5) to be set 00:29:28.385 [2024-07-15 13:28:24.966315] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf7700 is same with the state(5) to be set 00:29:28.385 [2024-07-15 13:28:24.966323] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf7700 is same with the state(5) to be set 00:29:28.385 [2024-07-15 13:28:24.966331] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf7700 is same with the state(5) to be set 00:29:28.385 [2024-07-15 13:28:24.966339] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf7700 is same with the state(5) to be set 00:29:28.385 [2024-07-15 13:28:24.966347] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf7700 is same with the state(5) to be set 00:29:28.385 [2024-07-15 13:28:24.966355] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf7700 is same with the state(5) to be set 00:29:28.385 [2024-07-15 13:28:24.966362] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf7700 is same with the state(5) to be set 00:29:28.385 [2024-07-15 13:28:24.966370] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf7700 is same with the state(5) to be set 00:29:28.385 [2024-07-15 13:28:24.966378] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf7700 is same with the state(5) to be set 00:29:28.385 [2024-07-15 13:28:24.966386] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf7700 is same with the state(5) to be set 00:29:28.385 [2024-07-15 13:28:24.966394] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf7700 is same with the state(5) to be set 00:29:28.385 [2024-07-15 13:28:24.966402] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf7700 is same with the state(5) to be set 00:29:28.385 [2024-07-15 13:28:24.966410] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf7700 is same with the state(5) to be set 00:29:28.385 [2024-07-15 13:28:24.966418] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf7700 is same with the state(5) to be set 00:29:28.385 [2024-07-15 13:28:24.966425] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf7700 is same with the state(5) to be set 00:29:28.385 [2024-07-15 13:28:24.966433] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf7700 is same with the state(5) to be set 00:29:28.385 [2024-07-15 13:28:24.966441] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf7700 is same with the state(5) to be set 00:29:28.385 [2024-07-15 13:28:24.966449] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0xaf7700 is same with the state(5) to be set 00:29:28.385 [2024-07-15 13:28:24.966457] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf7700 is same with the state(5) to be set 00:29:28.385 [2024-07-15 13:28:24.966465] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf7700 is same with the state(5) to be set 00:29:28.385 [2024-07-15 13:28:24.966473] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf7700 is same with the state(5) to be set 00:29:28.385 [2024-07-15 13:28:24.966481] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf7700 is same with the state(5) to be set 00:29:28.385 [2024-07-15 13:28:24.966488] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf7700 is same with the state(5) to be set 00:29:28.385 [2024-07-15 13:28:24.966496] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf7700 is same with the state(5) to be set 00:29:28.385 [2024-07-15 13:28:24.966504] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf7700 is same with the state(5) to be set 00:29:28.385 [2024-07-15 13:28:24.966512] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf7700 is same with the state(5) to be set 00:29:28.385 [2024-07-15 13:28:24.966520] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf7700 is same with the state(5) to be set 00:29:28.385 [2024-07-15 13:28:24.966528] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf7700 is same with the state(5) to be set 00:29:28.385 [2024-07-15 13:28:24.966536] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf7700 is same with the state(5) to be set 00:29:28.385 [2024-07-15 13:28:24.966544] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf7700 is same with the state(5) to be set 00:29:28.385 [2024-07-15 13:28:24.966552] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf7700 is same with the state(5) to be set 00:29:28.385 [2024-07-15 13:28:24.966560] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf7700 is same with the state(5) to be set 00:29:28.385 [2024-07-15 13:28:24.966568] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf7700 is same with the state(5) to be set 00:29:28.385 [2024-07-15 13:28:24.966575] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf7700 is same with the state(5) to be set 00:29:28.385 [2024-07-15 13:28:24.966583] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf7700 is same with the state(5) to be set 00:29:28.385 [2024-07-15 13:28:24.966590] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf7700 is same with the state(5) to be set 00:29:28.386 [2024-07-15 13:28:24.966612] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf7700 is same with the state(5) to be set 00:29:28.386 [2024-07-15 13:28:24.966620] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf7700 is same with the state(5) to be set 00:29:28.386 [2024-07-15 13:28:24.966627] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf7700 is same with the state(5) to be set 00:29:28.386 [2024-07-15 13:28:24.967227] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:36936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.386 [2024-07-15 13:28:24.967271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.386 [2024-07-15 13:28:24.967303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:30464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.386 [2024-07-15 13:28:24.967319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.386 [2024-07-15 13:28:24.967340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:71160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.386 [2024-07-15 13:28:24.967355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.386 [2024-07-15 13:28:24.967373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:100984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.386 [2024-07-15 13:28:24.967387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.386 [2024-07-15 13:28:24.967404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:77512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.386 [2024-07-15 13:28:24.967420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.386 [2024-07-15 13:28:24.967438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:63080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.386 [2024-07-15 13:28:24.967453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.386 [2024-07-15 13:28:24.967471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:18560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.386 [2024-07-15 13:28:24.967486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.386 [2024-07-15 13:28:24.967504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:50704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.386 [2024-07-15 13:28:24.967521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.386 [2024-07-15 13:28:24.967538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:90136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.386 [2024-07-15 13:28:24.967554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.386 [2024-07-15 13:28:24.967587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:64304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.386 [2024-07-15 13:28:24.967601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.386 [2024-07-15 13:28:24.967618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:56 nsid:1 lba:71160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.386 [2024-07-15 13:28:24.967634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.386 [2024-07-15 13:28:24.967651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:7728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.386 [2024-07-15 13:28:24.967666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.386 [2024-07-15 13:28:24.967683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:113352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.386 [2024-07-15 13:28:24.967713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.386 [2024-07-15 13:28:24.967747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:83696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.386 [2024-07-15 13:28:24.967762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.386 [2024-07-15 13:28:24.967778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:21440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.386 [2024-07-15 13:28:24.967792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.386 [2024-07-15 13:28:24.967808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:64792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.386 [2024-07-15 13:28:24.967823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.386 [2024-07-15 13:28:24.967839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:87832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.386 [2024-07-15 13:28:24.967856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.386 [2024-07-15 13:28:24.967873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:12464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.386 [2024-07-15 13:28:24.967888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.386 [2024-07-15 13:28:24.967905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:32960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.386 [2024-07-15 13:28:24.967920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.386 [2024-07-15 13:28:24.967936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:60040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.386 [2024-07-15 13:28:24.967967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.386 [2024-07-15 13:28:24.967984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:40608 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.386 [2024-07-15 13:28:24.967999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.386 [2024-07-15 13:28:24.968017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:111840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.386 [2024-07-15 13:28:24.968033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.386 [2024-07-15 13:28:24.968051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:79184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.386 [2024-07-15 13:28:24.968066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.386 [2024-07-15 13:28:24.968083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:48072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.386 [2024-07-15 13:28:24.968099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.386 [2024-07-15 13:28:24.968116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:31872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.386 [2024-07-15 13:28:24.968131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.386 [2024-07-15 13:28:24.968147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:34200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.386 [2024-07-15 13:28:24.968164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.386 [2024-07-15 13:28:24.968182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:29544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.386 [2024-07-15 13:28:24.968196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.386 [2024-07-15 13:28:24.968230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:17304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.386 [2024-07-15 13:28:24.968248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.386 [2024-07-15 13:28:24.968266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:11616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.386 [2024-07-15 13:28:24.968301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.386 [2024-07-15 13:28:24.968322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:39512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.386 [2024-07-15 13:28:24.968338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.386 [2024-07-15 13:28:24.968356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:104024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:28.386 [2024-07-15 13:28:24.968372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.386 [2024-07-15 13:28:24.968389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:41536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.386 [2024-07-15 13:28:24.968406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.386 [2024-07-15 13:28:24.968423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:118624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.386 [2024-07-15 13:28:24.968439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.386 [2024-07-15 13:28:24.968457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.386 [2024-07-15 13:28:24.968473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.386 [2024-07-15 13:28:24.968491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:49936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.386 [2024-07-15 13:28:24.968507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.386 [2024-07-15 13:28:24.968526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:62848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.386 [2024-07-15 13:28:24.968541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.386 [2024-07-15 13:28:24.968560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:70184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.386 [2024-07-15 13:28:24.968577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.386 [2024-07-15 13:28:24.968609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.386 [2024-07-15 13:28:24.968639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.386 [2024-07-15 13:28:24.968657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:112632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.386 [2024-07-15 13:28:24.968672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.386 [2024-07-15 13:28:24.968688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:51368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.386 [2024-07-15 13:28:24.968703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.386 [2024-07-15 13:28:24.968719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:124640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.387 [2024-07-15 13:28:24.968734] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.387 [2024-07-15 13:28:24.968750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.387 [2024-07-15 13:28:24.968765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.387 [2024-07-15 13:28:24.968782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:23784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.387 [2024-07-15 13:28:24.968796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.387 [2024-07-15 13:28:24.968813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:83048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.387 [2024-07-15 13:28:24.968828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.387 [2024-07-15 13:28:24.968844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:110632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.387 [2024-07-15 13:28:24.968859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.387 [2024-07-15 13:28:24.968875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:85064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.387 [2024-07-15 13:28:24.968890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.387 [2024-07-15 13:28:24.968906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:21320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.387 [2024-07-15 13:28:24.968921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.387 [2024-07-15 13:28:24.968937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:7952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.387 [2024-07-15 13:28:24.968952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.387 [2024-07-15 13:28:24.968969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:61984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.387 [2024-07-15 13:28:24.968983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.387 [2024-07-15 13:28:24.969000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:71248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.387 [2024-07-15 13:28:24.969015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.387 [2024-07-15 13:28:24.969031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:108032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.387 [2024-07-15 13:28:24.969047] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.387 [2024-07-15 13:28:24.969064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:3176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.387 [2024-07-15 13:28:24.969079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.387 [2024-07-15 13:28:24.969095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:41576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.387 [2024-07-15 13:28:24.969110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.387 [2024-07-15 13:28:24.969126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:1280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.387 [2024-07-15 13:28:24.969140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.387 [2024-07-15 13:28:24.969157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:129392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.387 [2024-07-15 13:28:24.969172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.387 [2024-07-15 13:28:24.969189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:44456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.387 [2024-07-15 13:28:24.969203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.387 [2024-07-15 13:28:24.969235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:104504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.387 [2024-07-15 13:28:24.969279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.387 [2024-07-15 13:28:24.969300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:46664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.387 [2024-07-15 13:28:24.969317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.387 [2024-07-15 13:28:24.969341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:80768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.387 [2024-07-15 13:28:24.969356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.387 [2024-07-15 13:28:24.969374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:98720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.387 [2024-07-15 13:28:24.969390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.387 [2024-07-15 13:28:24.969408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:23584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.387 [2024-07-15 13:28:24.969423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.387 [2024-07-15 13:28:24.969441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:85424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.387 [2024-07-15 13:28:24.969457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.387 [2024-07-15 13:28:24.969475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:54160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.387 [2024-07-15 13:28:24.969491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.387 [2024-07-15 13:28:24.969511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:103552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.387 [2024-07-15 13:28:24.969527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.387 [2024-07-15 13:28:24.969545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:6224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.387 [2024-07-15 13:28:24.969561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.387 [2024-07-15 13:28:24.969578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:17480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.387 [2024-07-15 13:28:24.969595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.387 [2024-07-15 13:28:24.969628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:94856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.387 [2024-07-15 13:28:24.969657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.387 [2024-07-15 13:28:24.969675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:12328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.387 [2024-07-15 13:28:24.969691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.387 [2024-07-15 13:28:24.969707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:100456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.387 [2024-07-15 13:28:24.969722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.387 [2024-07-15 13:28:24.969738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:96400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.387 [2024-07-15 13:28:24.969753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.387 [2024-07-15 13:28:24.969770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:115392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.387 [2024-07-15 13:28:24.969786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.387 [2024-07-15 13:28:24.969803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:60368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.387 [2024-07-15 13:28:24.969827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.387 [2024-07-15 13:28:24.969862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:92552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.387 [2024-07-15 13:28:24.969878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.387 [2024-07-15 13:28:24.969895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:106120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.387 [2024-07-15 13:28:24.969911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.387 [2024-07-15 13:28:24.969929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:47152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.387 [2024-07-15 13:28:24.969944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.387 [2024-07-15 13:28:24.969961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:39992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.387 [2024-07-15 13:28:24.969978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.387 [2024-07-15 13:28:24.969996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:27224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.387 [2024-07-15 13:28:24.970011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.387 [2024-07-15 13:28:24.970028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:61360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.387 [2024-07-15 13:28:24.970043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.387 [2024-07-15 13:28:24.970062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:81968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.387 [2024-07-15 13:28:24.970093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.387 [2024-07-15 13:28:24.970113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:58264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.387 [2024-07-15 13:28:24.970129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.387 [2024-07-15 13:28:24.970148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:105208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.387 [2024-07-15 13:28:24.970164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:28.387 [2024-07-15 13:28:24.970197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:121616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.387 [2024-07-15 13:28:24.970212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.387 [2024-07-15 13:28:24.970245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:94624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.388 [2024-07-15 13:28:24.970261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.388 [2024-07-15 13:28:24.970279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:82392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.388 [2024-07-15 13:28:24.970309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.388 [2024-07-15 13:28:24.970331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:97552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.388 [2024-07-15 13:28:24.970348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.388 [2024-07-15 13:28:24.970366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:30000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.388 [2024-07-15 13:28:24.970383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.388 [2024-07-15 13:28:24.970400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:105704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.388 [2024-07-15 13:28:24.970416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.388 [2024-07-15 13:28:24.970435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:42008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.388 [2024-07-15 13:28:24.970452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.388 [2024-07-15 13:28:24.970473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:86984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.388 [2024-07-15 13:28:24.970489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.388 [2024-07-15 13:28:24.970507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:67592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.388 [2024-07-15 13:28:24.970523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.388 [2024-07-15 13:28:24.970541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:56304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.388 [2024-07-15 13:28:24.970557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.388 [2024-07-15 13:28:24.970575] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:46392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.388 [2024-07-15 13:28:24.970628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.388 [2024-07-15 13:28:24.970646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:95168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.388 [2024-07-15 13:28:24.970660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.388 [2024-07-15 13:28:24.970677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.388 [2024-07-15 13:28:24.970693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.388 [2024-07-15 13:28:24.970710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:82464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.388 [2024-07-15 13:28:24.970724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.388 [2024-07-15 13:28:24.970741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:28000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.388 [2024-07-15 13:28:24.970756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.388 [2024-07-15 13:28:24.970804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:78616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.388 [2024-07-15 13:28:24.970820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.388 [2024-07-15 13:28:24.970838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:117648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.388 [2024-07-15 13:28:24.970854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.388 [2024-07-15 13:28:24.970871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:44112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.388 [2024-07-15 13:28:24.970887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.388 [2024-07-15 13:28:24.970905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:82536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.388 [2024-07-15 13:28:24.970921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.388 [2024-07-15 13:28:24.970940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:55672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.388 [2024-07-15 13:28:24.970957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.388 [2024-07-15 13:28:24.970976] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:43984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.388 [2024-07-15 13:28:24.970992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.388 [2024-07-15 13:28:24.971010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:53480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.388 [2024-07-15 13:28:24.971025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.388 [2024-07-15 13:28:24.971044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:38792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.388 [2024-07-15 13:28:24.971059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.388 [2024-07-15 13:28:24.971078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:102400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.388 [2024-07-15 13:28:24.971094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.388 [2024-07-15 13:28:24.971126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:50136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.388 [2024-07-15 13:28:24.971139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.388 [2024-07-15 13:28:24.971155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:110536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.388 [2024-07-15 13:28:24.971171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.388 [2024-07-15 13:28:24.971188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:101784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.388 [2024-07-15 13:28:24.971209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.388 [2024-07-15 13:28:24.971239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:127328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.388 [2024-07-15 13:28:24.971271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.388 [2024-07-15 13:28:24.971291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:39216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.388 [2024-07-15 13:28:24.971308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.388 [2024-07-15 13:28:24.971325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:66152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.388 [2024-07-15 13:28:24.971342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.388 [2024-07-15 13:28:24.971359] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:59 nsid:1 lba:119496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.388 [2024-07-15 13:28:24.971376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.388 [2024-07-15 13:28:24.971396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:63920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.388 [2024-07-15 13:28:24.971412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.388 [2024-07-15 13:28:24.971432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.388 [2024-07-15 13:28:24.971447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.388 [2024-07-15 13:28:24.971465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:35736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.388 [2024-07-15 13:28:24.971482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.388 [2024-07-15 13:28:24.971500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:21216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.388 [2024-07-15 13:28:24.971515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.388 [2024-07-15 13:28:24.971534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.388 [2024-07-15 13:28:24.971550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.388 [2024-07-15 13:28:24.971567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:28016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.388 [2024-07-15 13:28:24.971583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.388 [2024-07-15 13:28:24.971616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:1984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.388 [2024-07-15 13:28:24.971632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.388 [2024-07-15 13:28:24.971649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:34600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.388 [2024-07-15 13:28:24.971665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.388 [2024-07-15 13:28:24.971682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:5944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.388 [2024-07-15 13:28:24.971697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.388 [2024-07-15 13:28:24.971728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:66272 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.388 [2024-07-15 13:28:24.971743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.388 [2024-07-15 13:28:24.971760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:89280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.388 [2024-07-15 13:28:24.971775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.388 [2024-07-15 13:28:24.971798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:108128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.388 [2024-07-15 13:28:24.971819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.388 [2024-07-15 13:28:24.971837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:105528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.389 [2024-07-15 13:28:24.971852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.389 [2024-07-15 13:28:24.971870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:41656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.389 [2024-07-15 13:28:24.971886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.389 [2024-07-15 13:28:24.971903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:18944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.389 [2024-07-15 13:28:24.971919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.389 [2024-07-15 13:28:24.971954] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:28.389 [2024-07-15 13:28:24.971970] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:28.389 [2024-07-15 13:28:24.971983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:70864 len:8 PRP1 0x0 PRP2 0x0 00:29:28.389 [2024-07-15 13:28:24.971998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.389 [2024-07-15 13:28:24.972067] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x88eec0 was disconnected and freed. reset controller. 
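The dump above is the qpair teardown path: once the target drops the TCP connection, every READ still queued on qpair 0x88eec0 is completed manually with ABORTED - SQ DELETION (00/08), the qpair is freed, and a controller reset is scheduled. When reading one of these logs it can help to collapse the per-command notices into a per-status tally; a minimal shell sketch, assuming the console output has been saved to a file named build.log (a placeholder name, not something the test itself writes):

  # Tally aborted completions by status string from a saved copy of this log.
  # build.log is a stand-in path; point it at wherever the console log was captured.
  grep -o 'ABORTED - [A-Z ]*([0-9a-f/]*)' build.log | sort | uniq -c | sort -rn

For the section above this reports a single bucket, ABORTED - SQ DELETION (00/08), with one match per aborted command, including the final manually completed READ.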
00:29:28.389 [2024-07-15 13:28:24.972456] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.389 [2024-07-15 13:28:24.972581] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87bac0 (9): Bad file descriptor 00:29:28.389 [2024-07-15 13:28:24.972765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.389 [2024-07-15 13:28:24.972795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87bac0 with addr=10.0.0.2, port=4420 00:29:28.389 [2024-07-15 13:28:24.972814] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87bac0 is same with the state(5) to be set 00:29:28.389 [2024-07-15 13:28:24.972841] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87bac0 (9): Bad file descriptor 00:29:28.389 [2024-07-15 13:28:24.972867] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.389 [2024-07-15 13:28:24.972882] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.389 [2024-07-15 13:28:24.972899] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.389 [2024-07-15 13:28:24.972928] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.389 [2024-07-15 13:28:24.972946] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.389 13:28:24 nvmf_tcp.nvmf_timeout -- host/timeout.sh@128 -- # wait 115194 00:29:30.289 [2024-07-15 13:28:26.973134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.289 [2024-07-15 13:28:26.973216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87bac0 with addr=10.0.0.2, port=4420 00:29:30.289 [2024-07-15 13:28:26.973245] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87bac0 is same with the state(5) to be set 00:29:30.289 [2024-07-15 13:28:26.973303] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87bac0 (9): Bad file descriptor 00:29:30.289 [2024-07-15 13:28:26.973338] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.289 [2024-07-15 13:28:26.973356] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.289 [2024-07-15 13:28:26.973375] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.289 [2024-07-15 13:28:26.973418] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.289 [2024-07-15 13:28:26.973438] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.816 [2024-07-15 13:28:28.973704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.816 [2024-07-15 13:28:28.973782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87bac0 with addr=10.0.0.2, port=4420 00:29:32.816 [2024-07-15 13:28:28.973808] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87bac0 is same with the state(5) to be set 00:29:32.816 [2024-07-15 13:28:28.973848] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87bac0 (9): Bad file descriptor 00:29:32.816 [2024-07-15 13:28:28.973896] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.816 [2024-07-15 13:28:28.973916] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.816 [2024-07-15 13:28:28.973934] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.816 [2024-07-15 13:28:28.973974] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:32.816 [2024-07-15 13:28:28.973994] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.716 [2024-07-15 13:28:30.974092] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.716 [2024-07-15 13:28:30.974156] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.716 [2024-07-15 13:28:30.974176] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.716 [2024-07-15 13:28:30.974191] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:29:34.716 [2024-07-15 13:28:30.974243] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:35.282 00:29:35.282 Latency(us) 00:29:35.282 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:35.282 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:29:35.282 NVMe0n1 : 8.21 2773.95 10.84 15.59 0.00 45821.88 2234.18 7015926.69 00:29:35.282 =================================================================================================================== 00:29:35.282 Total : 2773.95 10.84 15.59 0.00 45821.88 2234.18 7015926.69 00:29:35.282 0 00:29:35.282 13:28:31 nvmf_tcp.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:29:35.282 Attaching 5 probes... 
00:29:35.282 1394.932819: reset bdev controller NVMe0 00:29:35.282 1395.159529: reconnect bdev controller NVMe0 00:29:35.282 3395.472994: reconnect delay bdev controller NVMe0 00:29:35.282 3395.513287: reconnect bdev controller NVMe0 00:29:35.282 5395.970985: reconnect delay bdev controller NVMe0 00:29:35.282 5395.995277: reconnect bdev controller NVMe0 00:29:35.282 7396.530934: reconnect delay bdev controller NVMe0 00:29:35.282 7396.571553: reconnect bdev controller NVMe0 00:29:35.282 13:28:31 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:29:35.283 13:28:32 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:29:35.283 13:28:32 nvmf_tcp.nvmf_timeout -- host/timeout.sh@136 -- # kill 115140 00:29:35.283 13:28:32 nvmf_tcp.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:29:35.283 13:28:32 nvmf_tcp.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 115112 00:29:35.283 13:28:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@946 -- # '[' -z 115112 ']' 00:29:35.283 13:28:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@950 -- # kill -0 115112 00:29:35.283 13:28:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # uname 00:29:35.283 13:28:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:35.283 13:28:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 115112 00:29:35.540 killing process with pid 115112 00:29:35.540 Received shutdown signal, test time was about 8.266195 seconds 00:29:35.540 00:29:35.540 Latency(us) 00:29:35.540 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:35.540 =================================================================================================================== 00:29:35.540 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:35.540 13:28:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:29:35.540 13:28:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:29:35.540 13:28:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@964 -- # echo 'killing process with pid 115112' 00:29:35.540 13:28:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@965 -- # kill 115112 00:29:35.540 13:28:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@970 -- # wait 115112 00:29:35.540 13:28:32 nvmf_tcp.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:36.105 13:28:32 nvmf_tcp.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:29:36.105 13:28:32 nvmf_tcp.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:29:36.105 13:28:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:36.105 13:28:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@117 -- # sync 00:29:36.105 13:28:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:36.105 13:28:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@120 -- # set +e 00:29:36.105 13:28:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:36.105 13:28:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:36.105 rmmod nvme_tcp 00:29:36.105 rmmod nvme_fabrics 00:29:36.105 rmmod nvme_keyring 00:29:36.105 13:28:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:36.105 13:28:32 nvmf_tcp.nvmf_timeout -- 
nvmf/common.sh@124 -- # set -e 00:29:36.105 13:28:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@125 -- # return 0 00:29:36.105 13:28:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@489 -- # '[' -n 114540 ']' 00:29:36.105 13:28:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@490 -- # killprocess 114540 00:29:36.105 13:28:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@946 -- # '[' -z 114540 ']' 00:29:36.105 13:28:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@950 -- # kill -0 114540 00:29:36.105 13:28:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # uname 00:29:36.105 13:28:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:36.105 13:28:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 114540 00:29:36.105 killing process with pid 114540 00:29:36.105 13:28:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:29:36.105 13:28:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:29:36.105 13:28:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@964 -- # echo 'killing process with pid 114540' 00:29:36.105 13:28:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@965 -- # kill 114540 00:29:36.105 13:28:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@970 -- # wait 114540 00:29:36.363 13:28:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:36.363 13:28:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:36.363 13:28:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:36.363 13:28:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:36.363 13:28:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:36.363 13:28:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:36.363 13:28:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:36.363 13:28:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:36.363 13:28:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:29:36.363 00:29:36.363 real 0m47.602s 00:29:36.363 user 2m20.284s 00:29:36.363 sys 0m5.063s 00:29:36.363 13:28:33 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:36.363 13:28:33 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:36.363 ************************************ 00:29:36.363 END TEST nvmf_timeout 00:29:36.363 ************************************ 00:29:36.363 13:28:33 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ virt == phy ]] 00:29:36.363 13:28:33 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:29:36.363 13:28:33 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:36.363 13:28:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:36.363 13:28:33 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:29:36.363 00:29:36.363 real 22m7.426s 00:29:36.363 user 66m38.574s 00:29:36.363 sys 4m34.425s 00:29:36.363 13:28:33 nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:36.363 ************************************ 00:29:36.363 END TEST nvmf_tcp 00:29:36.363 13:28:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:36.363 ************************************ 00:29:36.621 13:28:33 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:29:36.621 13:28:33 -- 
spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:29:36.621 13:28:33 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:29:36.621 13:28:33 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:36.621 13:28:33 -- common/autotest_common.sh@10 -- # set +x 00:29:36.621 ************************************ 00:29:36.621 START TEST spdkcli_nvmf_tcp 00:29:36.621 ************************************ 00:29:36.621 13:28:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:29:36.621 * Looking for test storage... 00:29:36.621 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:29:36.621 13:28:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:29:36.621 13:28:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:29:36.621 13:28:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:29:36.621 13:28:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:36.621 13:28:33 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:29:36.621 13:28:33 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:36.621 13:28:33 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:36.621 13:28:33 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:36.621 13:28:33 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:36.621 13:28:33 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:36.621 13:28:33 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:36.621 13:28:33 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:36.621 13:28:33 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:36.621 13:28:33 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:36.621 13:28:33 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:36.621 13:28:33 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:29:36.621 13:28:33 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=c8b8b44b-387e-43b9-a950-dc0d98528a02 00:29:36.621 13:28:33 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:36.621 13:28:33 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:36.621 13:28:33 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:36.621 13:28:33 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:36.621 13:28:33 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:36.621 13:28:33 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:36.621 13:28:33 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:36.621 13:28:33 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:36.621 13:28:33 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:36.621 13:28:33 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:36.621 13:28:33 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:36.621 13:28:33 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:29:36.621 13:28:33 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:36.621 13:28:33 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:29:36.621 13:28:33 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:36.621 13:28:33 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:36.621 13:28:33 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:36.621 13:28:33 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:36.621 13:28:33 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:36.621 13:28:33 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:36.621 13:28:33 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:36.621 13:28:33 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:36.621 13:28:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:29:36.621 13:28:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:29:36.621 13:28:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:29:36.621 13:28:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:29:36.621 13:28:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:36.621 13:28:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:36.621 13:28:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:29:36.621 13:28:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=115413 00:29:36.621 13:28:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 115413 00:29:36.621 13:28:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@827 -- # '[' -z 115413 ']' 00:29:36.621 13:28:33 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:36.621 13:28:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:29:36.621 13:28:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:36.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:36.621 13:28:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:36.621 13:28:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:36.621 13:28:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:36.621 [2024-07-15 13:28:33.297830] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:29:36.621 [2024-07-15 13:28:33.297918] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115413 ] 00:29:36.879 [2024-07-15 13:28:33.438708] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:36.879 [2024-07-15 13:28:33.534303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:36.879 [2024-07-15 13:28:33.534313] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:37.811 13:28:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:37.811 13:28:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # return 0 00:29:37.811 13:28:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:29:37.811 13:28:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:37.811 13:28:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:37.811 13:28:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:29:37.811 13:28:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:29:37.811 13:28:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:29:37.811 13:28:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:37.811 13:28:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:37.811 13:28:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:29:37.811 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:29:37.811 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:29:37.811 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:29:37.811 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:29:37.811 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:29:37.811 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:29:37.811 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:37.811 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:29:37.811 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:29:37.811 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:29:37.811 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:37.811 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:29:37.811 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:37.811 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:37.811 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:29:37.811 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:37.811 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:29:37.811 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:37.811 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:37.811 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:29:37.811 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:29:37.811 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:29:37.811 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:29:37.811 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:37.811 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:29:37.811 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:29:37.811 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:29:37.811 ' 00:29:40.410 [2024-07-15 13:28:37.007474] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:41.782 [2024-07-15 13:28:38.284602] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:29:44.309 [2024-07-15 13:28:40.642341] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:29:46.208 [2024-07-15 13:28:42.688106] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:29:47.580 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:29:47.580 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:29:47.580 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:29:47.580 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:29:47.580 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:29:47.580 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:29:47.580 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:29:47.580 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 
allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:29:47.580 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:29:47.580 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:29:47.580 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:47.580 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:47.580 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:29:47.580 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:47.580 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:47.580 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:29:47.580 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:47.580 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:29:47.580 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:29:47.580 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:47.580 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:29:47.580 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:29:47.580 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:29:47.580 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:29:47.580 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:47.580 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:29:47.580 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:29:47.580 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:29:47.838 13:28:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:29:47.838 13:28:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:47.838 13:28:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:47.838 13:28:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:29:47.838 13:28:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:47.838 13:28:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:47.838 13:28:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:29:47.838 13:28:44 spdkcli_nvmf_tcp -- 
spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf 00:29:48.096 13:28:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:29:48.354 13:28:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:29:48.354 13:28:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:29:48.354 13:28:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:48.354 13:28:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:48.354 13:28:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:29:48.354 13:28:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:48.354 13:28:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:48.354 13:28:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:29:48.354 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:29:48.354 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:29:48.354 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:29:48.354 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:29:48.354 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:29:48.354 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:29:48.354 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:29:48.354 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:29:48.354 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:29:48.354 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:29:48.354 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:29:48.354 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:29:48.355 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:29:48.355 ' 00:29:53.647 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:29:53.647 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:29:53.647 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:29:53.647 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:29:53.647 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:29:53.647 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:29:53.647 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:29:53.647 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:29:53.647 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:29:53.647 
Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:29:53.647 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:29:53.647 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:29:53.647 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:29:53.647 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:29:53.647 13:28:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:29:53.647 13:28:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:53.647 13:28:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:53.647 13:28:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 115413 00:29:53.647 13:28:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 115413 ']' 00:29:53.647 13:28:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 115413 00:29:53.647 13:28:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # uname 00:29:53.647 13:28:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:53.647 13:28:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 115413 00:29:53.905 13:28:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:29:53.905 13:28:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:29:53.905 13:28:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 115413' 00:29:53.905 killing process with pid 115413 00:29:53.905 13:28:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@965 -- # kill 115413 00:29:53.905 13:28:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@970 -- # wait 115413 00:29:53.905 13:28:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:29:53.905 13:28:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:29:53.905 13:28:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 115413 ']' 00:29:53.905 Process with pid 115413 is not found 00:29:53.905 13:28:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 115413 00:29:53.905 13:28:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 115413 ']' 00:29:53.905 13:28:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 115413 00:29:53.905 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (115413) - No such process 00:29:53.905 13:28:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # echo 'Process with pid 115413 is not found' 00:29:53.905 13:28:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:29:53.905 13:28:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:29:53.905 13:28:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:29:53.905 00:29:53.905 real 0m17.472s 00:29:53.905 user 0m37.710s 00:29:53.905 sys 0m0.985s 00:29:53.905 13:28:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:53.905 13:28:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:53.905 ************************************ 00:29:53.905 END TEST spdkcli_nvmf_tcp 00:29:53.905 ************************************ 00:29:54.165 13:28:50 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh 
--transport=tcp 00:29:54.165 13:28:50 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:29:54.165 13:28:50 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:54.165 13:28:50 -- common/autotest_common.sh@10 -- # set +x 00:29:54.165 ************************************ 00:29:54.165 START TEST nvmf_identify_passthru 00:29:54.165 ************************************ 00:29:54.165 13:28:50 nvmf_identify_passthru -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:29:54.165 * Looking for test storage... 00:29:54.165 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:29:54.165 13:28:50 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:54.165 13:28:50 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:29:54.165 13:28:50 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:54.165 13:28:50 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:54.165 13:28:50 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:54.165 13:28:50 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:54.165 13:28:50 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:54.165 13:28:50 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:54.165 13:28:50 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:54.165 13:28:50 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:54.165 13:28:50 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:54.165 13:28:50 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:54.165 13:28:50 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:29:54.165 13:28:50 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=c8b8b44b-387e-43b9-a950-dc0d98528a02 00:29:54.165 13:28:50 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:54.165 13:28:50 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:54.165 13:28:50 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:54.165 13:28:50 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:54.165 13:28:50 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:54.165 13:28:50 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:54.165 13:28:50 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:54.165 13:28:50 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:54.165 13:28:50 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:54.165 13:28:50 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:54.165 13:28:50 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:54.165 13:28:50 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:29:54.165 13:28:50 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:54.165 13:28:50 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:29:54.165 13:28:50 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:54.165 13:28:50 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:54.165 13:28:50 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:54.165 13:28:50 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:54.165 13:28:50 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:54.165 13:28:50 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:54.165 13:28:50 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:54.165 13:28:50 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:54.165 13:28:50 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:54.165 13:28:50 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:54.165 13:28:50 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:54.165 13:28:50 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:54.165 13:28:50 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:54.166 13:28:50 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:54.166 13:28:50 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:54.166 13:28:50 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:29:54.166 13:28:50 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:54.166 13:28:50 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:29:54.166 13:28:50 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:54.166 13:28:50 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:54.166 13:28:50 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:54.166 13:28:50 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:54.166 13:28:50 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:54.166 13:28:50 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:54.166 13:28:50 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:54.166 13:28:50 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:54.166 13:28:50 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:29:54.166 13:28:50 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:29:54.166 13:28:50 nvmf_identify_passthru -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:29:54.166 13:28:50 nvmf_identify_passthru -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:29:54.166 13:28:50 nvmf_identify_passthru -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:29:54.166 13:28:50 nvmf_identify_passthru -- nvmf/common.sh@432 -- # nvmf_veth_init 00:29:54.166 13:28:50 nvmf_identify_passthru -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:54.166 13:28:50 nvmf_identify_passthru -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:54.166 13:28:50 nvmf_identify_passthru -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:29:54.166 13:28:50 nvmf_identify_passthru -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:29:54.166 13:28:50 nvmf_identify_passthru -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:29:54.166 13:28:50 nvmf_identify_passthru -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:29:54.166 13:28:50 nvmf_identify_passthru -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:29:54.166 13:28:50 nvmf_identify_passthru -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:54.166 13:28:50 nvmf_identify_passthru -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:29:54.166 13:28:50 nvmf_identify_passthru -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:29:54.166 13:28:50 nvmf_identify_passthru -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:29:54.166 13:28:50 nvmf_identify_passthru -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:29:54.166 13:28:50 nvmf_identify_passthru -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:29:54.166 13:28:50 nvmf_identify_passthru -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:29:54.166 Cannot find device "nvmf_tgt_br" 00:29:54.166 13:28:50 nvmf_identify_passthru -- nvmf/common.sh@155 -- # true 00:29:54.166 13:28:50 nvmf_identify_passthru -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:29:54.166 Cannot find device "nvmf_tgt_br2" 00:29:54.166 13:28:50 nvmf_identify_passthru -- nvmf/common.sh@156 -- # true 00:29:54.166 13:28:50 nvmf_identify_passthru -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:29:54.166 13:28:50 nvmf_identify_passthru -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:29:54.166 Cannot find device "nvmf_tgt_br" 00:29:54.166 13:28:50 nvmf_identify_passthru -- nvmf/common.sh@158 -- # true 00:29:54.166 13:28:50 nvmf_identify_passthru -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:29:54.166 Cannot find device "nvmf_tgt_br2" 00:29:54.166 13:28:50 nvmf_identify_passthru -- nvmf/common.sh@159 -- # true 00:29:54.166 13:28:50 nvmf_identify_passthru -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:29:54.166 13:28:50 nvmf_identify_passthru -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:29:54.166 13:28:50 nvmf_identify_passthru -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:54.166 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:54.166 13:28:50 nvmf_identify_passthru -- nvmf/common.sh@162 -- # true 00:29:54.166 13:28:50 nvmf_identify_passthru -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:54.166 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:54.166 13:28:50 nvmf_identify_passthru -- nvmf/common.sh@163 -- # true 00:29:54.166 13:28:50 nvmf_identify_passthru -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:29:54.166 13:28:50 nvmf_identify_passthru -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:29:54.166 13:28:50 nvmf_identify_passthru -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:29:54.424 13:28:50 nvmf_identify_passthru -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:29:54.424 13:28:50 nvmf_identify_passthru -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:29:54.424 13:28:50 nvmf_identify_passthru -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:29:54.424 13:28:50 nvmf_identify_passthru -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:29:54.424 13:28:50 nvmf_identify_passthru -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:29:54.424 13:28:50 nvmf_identify_passthru -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:29:54.424 13:28:50 nvmf_identify_passthru -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:29:54.424 13:28:50 nvmf_identify_passthru -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:29:54.424 13:28:50 nvmf_identify_passthru -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:29:54.424 13:28:50 nvmf_identify_passthru -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:29:54.424 13:28:50 nvmf_identify_passthru -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:54.424 13:28:50 nvmf_identify_passthru -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:29:54.424 13:28:50 nvmf_identify_passthru -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:29:54.424 13:28:50 nvmf_identify_passthru -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:29:54.424 13:28:50 nvmf_identify_passthru -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:29:54.424 13:28:50 nvmf_identify_passthru -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:29:54.424 13:28:51 nvmf_identify_passthru -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:29:54.424 13:28:51 nvmf_identify_passthru -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:29:54.424 13:28:51 nvmf_identify_passthru -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:29:54.424 13:28:51 nvmf_identify_passthru -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:29:54.424 13:28:51 nvmf_identify_passthru -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:29:54.424 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:54.424 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.120 ms 00:29:54.424 00:29:54.424 --- 10.0.0.2 ping statistics --- 00:29:54.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:54.424 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:29:54.424 13:28:51 nvmf_identify_passthru -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:29:54.424 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:29:54.424 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:29:54.424 00:29:54.424 --- 10.0.0.3 ping statistics --- 00:29:54.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:54.424 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:29:54.424 13:28:51 nvmf_identify_passthru -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:29:54.424 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:54.424 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:29:54.424 00:29:54.424 --- 10.0.0.1 ping statistics --- 00:29:54.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:54.424 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:29:54.424 13:28:51 nvmf_identify_passthru -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:54.424 13:28:51 nvmf_identify_passthru -- nvmf/common.sh@433 -- # return 0 00:29:54.424 13:28:51 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:54.424 13:28:51 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:54.424 13:28:51 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:54.424 13:28:51 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:54.424 13:28:51 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:54.424 13:28:51 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:54.424 13:28:51 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:54.424 13:28:51 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:29:54.424 13:28:51 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:54.424 13:28:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:54.424 13:28:51 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:29:54.424 13:28:51 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # bdfs=() 00:29:54.424 13:28:51 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # local bdfs 00:29:54.424 13:28:51 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # bdfs=($(get_nvme_bdfs)) 00:29:54.424 13:28:51 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # get_nvme_bdfs 00:29:54.424 13:28:51 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:29:54.424 13:28:51 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:29:54.424 13:28:51 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:54.424 13:28:51 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:29:54.424 13:28:51 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:29:54.424 13:28:51 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # (( 2 == 0 )) 00:29:54.424 13:28:51 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:29:54.424 13:28:51 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # echo 0000:00:10.0 00:29:54.424 13:28:51 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:00:10.0 00:29:54.424 13:28:51 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:10.0 ']' 00:29:54.424 13:28:51 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:29:54.424 13:28:51 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:29:54.424 13:28:51 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:29:54.683 13:28:51 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 
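The nvmf_veth_init trace above boils down to a small, reproducible topology: three veth pairs bridged on the host, with the target-side ends moved into the nvmf_tgt_ns_spdk namespace. The sketch below condenses those traced commands into one place for reference; interface names and the 10.0.0.0/24 addresses are taken from the log, but this is an illustration of what the harness does, not the harness script itself.

#!/usr/bin/env bash
# Condensed from the nvmf_veth_init trace above (illustrative sketch, not the harness script).
set -e
NS=nvmf_tgt_ns_spdk

ip netns add "$NS"

# Three veth pairs: one initiator-side, two target-side.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# The target ends live inside the namespace where nvmf_tgt will run.
ip link set nvmf_tgt_if  netns "$NS"
ip link set nvmf_tgt_if2 netns "$NS"

# Addressing: 10.0.0.1 = initiator, 10.0.0.2 / 10.0.0.3 = target listen addresses.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# Bring everything up, including loopback inside the namespace.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set nvmf_tgt_if2 up
ip netns exec "$NS" ip link set lo up

# One bridge ties the host-side peers together.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Accept NVMe/TCP (port 4420) on the initiator interface; allow bridge-local forwarding.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Connectivity checks, matching the ping output captured above.
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec "$NS" ping -c 1 10.0.0.1

The "Cannot find device" / "Cannot open network namespace" messages earlier in the trace come from best-effort teardown of a previous run and are expected on a clean node; the harness masks those failures with "true" before rebuilding the topology.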
00:29:54.683 13:28:51 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:29:54.683 13:28:51 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:29:54.683 13:28:51 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:29:54.941 13:28:51 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 00:29:54.941 13:28:51 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:29:54.941 13:28:51 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:54.941 13:28:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:54.941 13:28:51 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:29:54.941 13:28:51 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:54.941 13:28:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:54.941 13:28:51 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=115907 00:29:54.941 13:28:51 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:29:54.941 13:28:51 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:54.941 13:28:51 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 115907 00:29:54.941 13:28:51 nvmf_identify_passthru -- common/autotest_common.sh@827 -- # '[' -z 115907 ']' 00:29:54.941 13:28:51 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:54.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:54.941 13:28:51 nvmf_identify_passthru -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:54.941 13:28:51 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:54.941 13:28:51 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:54.941 13:28:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:54.941 [2024-07-15 13:28:51.599294] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:29:54.941 [2024-07-15 13:28:51.599398] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:55.199 [2024-07-15 13:28:51.736745] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:55.199 [2024-07-15 13:28:51.838943] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:55.199 [2024-07-15 13:28:51.839006] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:55.199 [2024-07-15 13:28:51.839020] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:55.199 [2024-07-15 13:28:51.839031] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:29:55.199 [2024-07-15 13:28:51.839040] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:55.199 [2024-07-15 13:28:51.839145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:55.199 [2024-07-15 13:28:51.839871] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:55.199 [2024-07-15 13:28:51.841244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:55.199 [2024-07-15 13:28:51.841312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:56.132 13:28:52 nvmf_identify_passthru -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:56.132 13:28:52 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # return 0 00:29:56.132 13:28:52 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:29:56.132 13:28:52 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:56.132 13:28:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:56.132 13:28:52 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:56.132 13:28:52 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:29:56.132 13:28:52 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:56.132 13:28:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:56.132 [2024-07-15 13:28:52.741639] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:29:56.132 13:28:52 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:56.132 13:28:52 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:56.132 13:28:52 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:56.132 13:28:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:56.132 [2024-07-15 13:28:52.755738] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:56.132 13:28:52 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:56.132 13:28:52 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:29:56.132 13:28:52 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:56.132 13:28:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:56.132 13:28:52 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:29:56.132 13:28:52 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:56.132 13:28:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:56.390 Nvme0n1 00:29:56.390 13:28:52 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:56.390 13:28:52 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:29:56.390 13:28:52 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:56.391 13:28:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:56.391 13:28:52 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:56.391 13:28:52 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:29:56.391 13:28:52 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:56.391 13:28:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:56.391 13:28:52 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:56.391 13:28:52 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:56.391 13:28:52 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:56.391 13:28:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:56.391 [2024-07-15 13:28:52.895827] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:56.391 13:28:52 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:56.391 13:28:52 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:29:56.391 13:28:52 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:56.391 13:28:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:56.391 [ 00:29:56.391 { 00:29:56.391 "allow_any_host": true, 00:29:56.391 "hosts": [], 00:29:56.391 "listen_addresses": [], 00:29:56.391 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:56.391 "subtype": "Discovery" 00:29:56.391 }, 00:29:56.391 { 00:29:56.391 "allow_any_host": true, 00:29:56.391 "hosts": [], 00:29:56.391 "listen_addresses": [ 00:29:56.391 { 00:29:56.391 "adrfam": "IPv4", 00:29:56.391 "traddr": "10.0.0.2", 00:29:56.391 "trsvcid": "4420", 00:29:56.391 "trtype": "TCP" 00:29:56.391 } 00:29:56.391 ], 00:29:56.391 "max_cntlid": 65519, 00:29:56.391 "max_namespaces": 1, 00:29:56.391 "min_cntlid": 1, 00:29:56.391 "model_number": "SPDK bdev Controller", 00:29:56.391 "namespaces": [ 00:29:56.391 { 00:29:56.391 "bdev_name": "Nvme0n1", 00:29:56.391 "name": "Nvme0n1", 00:29:56.391 "nguid": "1E7649DE01F84805B534CBF21A1D1E2F", 00:29:56.391 "nsid": 1, 00:29:56.391 "uuid": "1e7649de-01f8-4805-b534-cbf21a1d1e2f" 00:29:56.391 } 00:29:56.391 ], 00:29:56.391 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:56.391 "serial_number": "SPDK00000000000001", 00:29:56.391 "subtype": "NVMe" 00:29:56.391 } 00:29:56.391 ] 00:29:56.391 13:28:52 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:56.391 13:28:52 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:29:56.391 13:28:52 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:56.391 13:28:52 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:29:56.649 13:28:53 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 00:29:56.649 13:28:53 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:56.649 13:28:53 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:29:56.649 13:28:53 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:29:56.649 13:28:53 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 00:29:56.649 13:28:53 
nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 00:29:56.649 13:28:53 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 00:29:56.649 13:28:53 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:56.649 13:28:53 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:56.649 13:28:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:56.649 13:28:53 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:56.649 13:28:53 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:29:56.649 13:28:53 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:29:56.649 13:28:53 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:56.649 13:28:53 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:29:56.906 13:28:53 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:56.906 13:28:53 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:29:56.906 13:28:53 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:56.906 13:28:53 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:56.906 rmmod nvme_tcp 00:29:56.906 rmmod nvme_fabrics 00:29:56.906 rmmod nvme_keyring 00:29:56.906 13:28:53 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:56.906 13:28:53 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:29:56.906 13:28:53 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:29:56.906 13:28:53 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 115907 ']' 00:29:56.906 13:28:53 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 115907 00:29:56.906 13:28:53 nvmf_identify_passthru -- common/autotest_common.sh@946 -- # '[' -z 115907 ']' 00:29:56.906 13:28:53 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # kill -0 115907 00:29:56.906 13:28:53 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # uname 00:29:56.906 13:28:53 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:56.906 13:28:53 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 115907 00:29:56.906 13:28:53 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:29:56.907 13:28:53 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:29:56.907 killing process with pid 115907 00:29:56.907 13:28:53 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # echo 'killing process with pid 115907' 00:29:56.907 13:28:53 nvmf_identify_passthru -- common/autotest_common.sh@965 -- # kill 115907 00:29:56.907 13:28:53 nvmf_identify_passthru -- common/autotest_common.sh@970 -- # wait 115907 00:29:57.165 13:28:53 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:57.165 13:28:53 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:57.165 13:28:53 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:57.165 13:28:53 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:57.165 13:28:53 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:57.165 13:28:53 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:57.165 
13:28:53 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:57.165 13:28:53 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:57.165 13:28:53 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:29:57.165 00:29:57.165 real 0m3.092s 00:29:57.165 user 0m7.922s 00:29:57.165 sys 0m0.795s 00:29:57.165 13:28:53 nvmf_identify_passthru -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:57.165 ************************************ 00:29:57.165 13:28:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:57.165 END TEST nvmf_identify_passthru 00:29:57.165 ************************************ 00:29:57.165 13:28:53 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:29:57.165 13:28:53 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:29:57.165 13:28:53 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:57.165 13:28:53 -- common/autotest_common.sh@10 -- # set +x 00:29:57.165 ************************************ 00:29:57.165 START TEST nvmf_dif 00:29:57.165 ************************************ 00:29:57.165 13:28:53 nvmf_dif -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:29:57.165 * Looking for test storage... 00:29:57.165 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:29:57.165 13:28:53 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:57.165 13:28:53 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:29:57.165 13:28:53 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:57.165 13:28:53 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:57.165 13:28:53 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:57.165 13:28:53 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:57.165 13:28:53 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:57.165 13:28:53 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:57.165 13:28:53 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:57.165 13:28:53 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:57.165 13:28:53 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:57.165 13:28:53 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:57.165 13:28:53 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:29:57.165 13:28:53 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=c8b8b44b-387e-43b9-a950-dc0d98528a02 00:29:57.165 13:28:53 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:57.165 13:28:53 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:57.165 13:28:53 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:57.165 13:28:53 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:57.165 13:28:53 nvmf_dif -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:57.165 13:28:53 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:57.165 13:28:53 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:57.165 13:28:53 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:57.165 13:28:53 nvmf_dif -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.165 13:28:53 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.165 13:28:53 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.165 13:28:53 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:29:57.165 13:28:53 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.165 13:28:53 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:29:57.165 13:28:53 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:57.165 13:28:53 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:57.165 13:28:53 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:57.165 13:28:53 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:57.165 13:28:53 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:57.165 13:28:53 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:57.165 13:28:53 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:57.165 13:28:53 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:57.165 13:28:53 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:29:57.165 13:28:53 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:29:57.165 13:28:53 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:29:57.165 13:28:53 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:29:57.165 13:28:53 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:29:57.165 13:28:53 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:57.165 13:28:53 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:57.165 13:28:53 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:57.165 13:28:53 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:57.165 13:28:53 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:57.165 13:28:53 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:57.165 13:28:53 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:57.165 13:28:53 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
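Before the trace dives back into nvmf_veth_init, it helps to know what this dif suite is about to configure: an NVMe/TCP target with DIF insert/strip enabled, backed by null bdevs that carry 16 bytes of per-block metadata (NULL_META=16, NULL_BLOCK_SIZE=512, NULL_SIZE=64, NULL_DIF=1 above). The sketch below strings together the RPCs that appear further down in the trace, written against SPDK's scripts/rpc.py (rpc_cmd in these traces is the harness's wrapper around that tool); the argument values are copied verbatim from the trace, while the rpc.py path is inferred from the repo layout shown in the log. It is a reading aid, not a replacement for dif.sh.

#!/usr/bin/env bash
# Recap of the target-side setup traced below for the first fio_dif case (illustrative).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # path assumed from the repo layout in the log

# TCP transport with DIF insert/strip enabled (dif.sh appends --dif-insert-or-strip).
"$rpc" nvmf_create_transport -t tcp -o --dif-insert-or-strip

# Null bdev sized per NULL_SIZE/NULL_BLOCK_SIZE above (64 / 512), 16-byte metadata, DIF type 1.
"$rpc" bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1

# Expose it through subsystem cnode0 and listen on the in-namespace target address.
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

fio then drives this subsystem from the host side through the bdev fio plugin (LD_PRELOAD of build/fio/spdk_bdev with --ioengine=spdk_bdev), using a generated JSON configuration whose bdev_nvme_attach_controller stanza is visible in the trace below.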
00:29:57.424 13:28:53 nvmf_dif -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:29:57.424 13:28:53 nvmf_dif -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:29:57.424 13:28:53 nvmf_dif -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:29:57.424 13:28:53 nvmf_dif -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:29:57.424 13:28:53 nvmf_dif -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:29:57.424 13:28:53 nvmf_dif -- nvmf/common.sh@432 -- # nvmf_veth_init 00:29:57.424 13:28:53 nvmf_dif -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:57.424 13:28:53 nvmf_dif -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:57.424 13:28:53 nvmf_dif -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:29:57.424 13:28:53 nvmf_dif -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:29:57.424 13:28:53 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:29:57.424 13:28:53 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:29:57.424 13:28:53 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:29:57.424 13:28:53 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:57.424 13:28:53 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:29:57.424 13:28:53 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:29:57.424 13:28:53 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:29:57.424 13:28:53 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:29:57.424 13:28:53 nvmf_dif -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:29:57.424 13:28:53 nvmf_dif -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:29:57.424 Cannot find device "nvmf_tgt_br" 00:29:57.424 13:28:53 nvmf_dif -- nvmf/common.sh@155 -- # true 00:29:57.424 13:28:53 nvmf_dif -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:29:57.424 Cannot find device "nvmf_tgt_br2" 00:29:57.424 13:28:53 nvmf_dif -- nvmf/common.sh@156 -- # true 00:29:57.424 13:28:53 nvmf_dif -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:29:57.424 13:28:53 nvmf_dif -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:29:57.424 Cannot find device "nvmf_tgt_br" 00:29:57.424 13:28:53 nvmf_dif -- nvmf/common.sh@158 -- # true 00:29:57.424 13:28:53 nvmf_dif -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:29:57.424 Cannot find device "nvmf_tgt_br2" 00:29:57.424 13:28:53 nvmf_dif -- nvmf/common.sh@159 -- # true 00:29:57.424 13:28:53 nvmf_dif -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:29:57.424 13:28:54 nvmf_dif -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:29:57.424 13:28:54 nvmf_dif -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:57.424 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:57.424 13:28:54 nvmf_dif -- nvmf/common.sh@162 -- # true 00:29:57.424 13:28:54 nvmf_dif -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:57.424 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:57.424 13:28:54 nvmf_dif -- nvmf/common.sh@163 -- # true 00:29:57.424 13:28:54 nvmf_dif -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:29:57.424 13:28:54 nvmf_dif -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:29:57.424 13:28:54 nvmf_dif -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type 
veth peer name nvmf_tgt_br 00:29:57.424 13:28:54 nvmf_dif -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:29:57.424 13:28:54 nvmf_dif -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:29:57.424 13:28:54 nvmf_dif -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:29:57.424 13:28:54 nvmf_dif -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:29:57.424 13:28:54 nvmf_dif -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:29:57.424 13:28:54 nvmf_dif -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:29:57.424 13:28:54 nvmf_dif -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:29:57.424 13:28:54 nvmf_dif -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:29:57.424 13:28:54 nvmf_dif -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:29:57.424 13:28:54 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:29:57.424 13:28:54 nvmf_dif -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:57.424 13:28:54 nvmf_dif -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:29:57.424 13:28:54 nvmf_dif -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:29:57.424 13:28:54 nvmf_dif -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:29:57.424 13:28:54 nvmf_dif -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:29:57.424 13:28:54 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:29:57.682 13:28:54 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:29:57.682 13:28:54 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:29:57.682 13:28:54 nvmf_dif -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:29:57.682 13:28:54 nvmf_dif -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:29:57.682 13:28:54 nvmf_dif -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:29:57.682 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:57.682 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 00:29:57.682 00:29:57.682 --- 10.0.0.2 ping statistics --- 00:29:57.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:57.682 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:29:57.682 13:28:54 nvmf_dif -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:29:57.682 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:29:57.682 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:29:57.682 00:29:57.682 --- 10.0.0.3 ping statistics --- 00:29:57.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:57.682 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:29:57.682 13:28:54 nvmf_dif -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:29:57.682 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:57.682 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:29:57.682 00:29:57.682 --- 10.0.0.1 ping statistics --- 00:29:57.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:57.682 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:29:57.682 13:28:54 nvmf_dif -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:57.682 13:28:54 nvmf_dif -- nvmf/common.sh@433 -- # return 0 00:29:57.682 13:28:54 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:29:57.682 13:28:54 nvmf_dif -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:29:57.940 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:57.940 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:29:57.940 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:29:57.940 13:28:54 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:57.940 13:28:54 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:57.940 13:28:54 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:57.940 13:28:54 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:57.940 13:28:54 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:57.940 13:28:54 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:57.940 13:28:54 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:29:57.940 13:28:54 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:29:57.940 13:28:54 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:57.940 13:28:54 nvmf_dif -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:57.940 13:28:54 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:57.940 13:28:54 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=116250 00:29:57.940 13:28:54 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 116250 00:29:57.940 13:28:54 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:29:57.940 13:28:54 nvmf_dif -- common/autotest_common.sh@827 -- # '[' -z 116250 ']' 00:29:57.940 13:28:54 nvmf_dif -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:57.940 13:28:54 nvmf_dif -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:57.940 13:28:54 nvmf_dif -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:57.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:57.940 13:28:54 nvmf_dif -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:57.940 13:28:54 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:58.198 [2024-07-15 13:28:54.690888] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:29:58.198 [2024-07-15 13:28:54.690992] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:58.198 [2024-07-15 13:28:54.833571] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:58.198 [2024-07-15 13:28:54.929833] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:29:58.198 [2024-07-15 13:28:54.929910] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:58.198 [2024-07-15 13:28:54.929925] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:58.198 [2024-07-15 13:28:54.929936] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:58.198 [2024-07-15 13:28:54.929945] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:58.198 [2024-07-15 13:28:54.929976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:59.131 13:28:55 nvmf_dif -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:59.131 13:28:55 nvmf_dif -- common/autotest_common.sh@860 -- # return 0 00:29:59.131 13:28:55 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:59.131 13:28:55 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:59.131 13:28:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:59.131 13:28:55 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:59.131 13:28:55 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:29:59.131 13:28:55 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:29:59.131 13:28:55 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:59.131 13:28:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:59.131 [2024-07-15 13:28:55.686177] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:59.131 13:28:55 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:59.131 13:28:55 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:29:59.131 13:28:55 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:29:59.131 13:28:55 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:59.131 13:28:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:59.131 ************************************ 00:29:59.131 START TEST fio_dif_1_default 00:29:59.131 ************************************ 00:29:59.131 13:28:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1121 -- # fio_dif_1 00:29:59.131 13:28:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:29:59.131 13:28:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:29:59.131 13:28:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:29:59.131 13:28:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:29:59.131 13:28:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:29:59.131 13:28:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:29:59.131 13:28:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:59.131 13:28:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:59.131 bdev_null0 00:29:59.131 13:28:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:59.131 13:28:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:59.131 13:28:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:59.131 13:28:55 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:59.131 13:28:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:59.131 13:28:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:59.132 13:28:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:59.132 13:28:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:59.132 13:28:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:59.132 13:28:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:59.132 13:28:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:59.132 13:28:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:59.132 [2024-07-15 13:28:55.730331] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:59.132 13:28:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:59.132 13:28:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:29:59.132 13:28:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:29:59.132 13:28:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:29:59.132 13:28:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:59.132 13:28:55 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:29:59.132 13:28:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:59.132 13:28:55 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:29:59.132 13:28:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:29:59.132 13:28:55 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:59.132 13:28:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:29:59.132 13:28:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:59.132 13:28:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # local sanitizers 00:29:59.132 13:28:55 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:59.132 { 00:29:59.132 "params": { 00:29:59.132 "name": "Nvme$subsystem", 00:29:59.132 "trtype": "$TEST_TRANSPORT", 00:29:59.132 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:59.132 "adrfam": "ipv4", 00:29:59.132 "trsvcid": "$NVMF_PORT", 00:29:59.132 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:59.132 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:59.132 "hdgst": ${hdgst:-false}, 00:29:59.132 "ddgst": ${ddgst:-false} 00:29:59.132 }, 00:29:59.132 "method": "bdev_nvme_attach_controller" 00:29:59.132 } 00:29:59.132 EOF 00:29:59.132 )") 00:29:59.132 13:28:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:29:59.132 13:28:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:59.132 13:28:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # shift 
00:29:59.132 13:28:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:29:59.132 13:28:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local asan_lib= 00:29:59.132 13:28:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:59.132 13:28:55 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:29:59.132 13:28:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:59.132 13:28:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:29:59.132 13:28:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libasan 00:29:59.132 13:28:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:29:59.132 13:28:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:59.132 13:28:55 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:29:59.132 13:28:55 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:29:59.132 13:28:55 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:59.132 "params": { 00:29:59.132 "name": "Nvme0", 00:29:59.132 "trtype": "tcp", 00:29:59.132 "traddr": "10.0.0.2", 00:29:59.132 "adrfam": "ipv4", 00:29:59.132 "trsvcid": "4420", 00:29:59.132 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:59.132 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:59.132 "hdgst": false, 00:29:59.132 "ddgst": false 00:29:59.132 }, 00:29:59.132 "method": "bdev_nvme_attach_controller" 00:29:59.132 }' 00:29:59.132 13:28:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:59.132 13:28:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:59.132 13:28:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:59.132 13:28:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:59.132 13:28:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:29:59.132 13:28:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:59.132 13:28:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:59.132 13:28:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:59.132 13:28:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:29:59.132 13:28:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:59.390 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:29:59.390 fio-3.35 00:29:59.390 Starting 1 thread 00:30:11.591 00:30:11.591 filename0: (groupid=0, jobs=1): err= 0: pid=116339: Mon Jul 15 13:29:06 2024 00:30:11.591 read: IOPS=1264, BW=5059KiB/s (5180kB/s)(49.4MiB/10001msec) 00:30:11.591 slat (nsec): min=6225, max=54113, avg=8171.86, stdev=3505.47 00:30:11.591 clat (usec): min=375, max=41969, avg=3137.96, stdev=10105.87 00:30:11.591 lat (usec): min=382, max=41980, avg=3146.13, stdev=10105.97 00:30:11.591 clat percentiles (usec): 00:30:11.591 | 1.00th=[ 388], 5.00th=[ 392], 10.00th=[ 400], 20.00th=[ 408], 00:30:11.591 | 30.00th=[ 416], 40.00th=[ 424], 50.00th=[ 433], 
60.00th=[ 441], 00:30:11.591 | 70.00th=[ 449], 80.00th=[ 465], 90.00th=[ 510], 95.00th=[40633], 00:30:11.591 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:30:11.591 | 99.99th=[42206] 00:30:11.591 bw ( KiB/s): min= 2368, max= 8736, per=100.00%, avg=5102.53, stdev=1787.88, samples=19 00:30:11.591 iops : min= 592, max= 2184, avg=1275.63, stdev=446.97, samples=19 00:30:11.591 lat (usec) : 500=89.00%, 750=4.26% 00:30:11.591 lat (msec) : 2=0.03%, 10=0.03%, 50=6.67% 00:30:11.591 cpu : usr=90.97%, sys=8.26%, ctx=19, majf=0, minf=0 00:30:11.591 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:11.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.591 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.591 issued rwts: total=12648,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:11.591 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:11.591 00:30:11.591 Run status group 0 (all jobs): 00:30:11.591 READ: bw=5059KiB/s (5180kB/s), 5059KiB/s-5059KiB/s (5180kB/s-5180kB/s), io=49.4MiB (51.8MB), run=10001-10001msec 00:30:11.591 13:29:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:30:11.591 13:29:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:30:11.591 13:29:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:30:11.591 13:29:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:11.591 13:29:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:30:11.591 13:29:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:11.591 13:29:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:11.591 13:29:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:11.591 13:29:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:11.591 13:29:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:11.591 13:29:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:11.591 13:29:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:11.591 ************************************ 00:30:11.591 END TEST fio_dif_1_default 00:30:11.591 ************************************ 00:30:11.591 13:29:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:11.591 00:30:11.591 real 0m10.953s 00:30:11.591 user 0m9.707s 00:30:11.591 sys 0m1.080s 00:30:11.591 13:29:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:11.591 13:29:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:11.591 13:29:06 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:30:11.591 13:29:06 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:30:11.591 13:29:06 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:11.591 13:29:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:11.591 ************************************ 00:30:11.591 START TEST fio_dif_1_multi_subsystems 00:30:11.591 ************************************ 00:30:11.591 13:29:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1121 -- # fio_dif_1_multi_subsystems 00:30:11.591 13:29:06 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:30:11.591 13:29:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:30:11.591 13:29:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:30:11.591 13:29:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:30:11.591 13:29:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:30:11.591 13:29:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:30:11.591 13:29:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:11.591 13:29:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:11.591 13:29:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:11.591 bdev_null0 00:30:11.591 13:29:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:11.591 13:29:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:11.591 13:29:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:11.591 13:29:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:11.591 13:29:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:11.591 13:29:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:11.591 13:29:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:11.591 13:29:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:11.591 13:29:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:11.591 13:29:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:11.591 13:29:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:11.591 13:29:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:11.591 [2024-07-15 13:29:06.737746] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:11.591 13:29:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:11.591 13:29:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:30:11.591 13:29:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:30:11.591 13:29:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:30:11.591 13:29:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:30:11.591 13:29:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:11.591 13:29:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:11.591 bdev_null1 00:30:11.591 13:29:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:11.591 13:29:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 
-- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:11.591 13:29:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:11.591 13:29:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:11.591 13:29:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:11.591 13:29:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:11.591 13:29:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:11.591 13:29:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:11.591 13:29:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:11.591 13:29:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:11.591 13:29:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:11.591 13:29:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:11.591 13:29:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:11.591 13:29:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:30:11.591 13:29:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:30:11.591 13:29:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:30:11.591 13:29:06 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:30:11.591 13:29:06 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:30:11.591 13:29:06 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:11.591 13:29:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:11.591 13:29:06 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:11.591 { 00:30:11.591 "params": { 00:30:11.591 "name": "Nvme$subsystem", 00:30:11.591 "trtype": "$TEST_TRANSPORT", 00:30:11.591 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:11.591 "adrfam": "ipv4", 00:30:11.592 "trsvcid": "$NVMF_PORT", 00:30:11.592 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:11.592 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:11.592 "hdgst": ${hdgst:-false}, 00:30:11.592 "ddgst": ${ddgst:-false} 00:30:11.592 }, 00:30:11.592 "method": "bdev_nvme_attach_controller" 00:30:11.592 } 00:30:11.592 EOF 00:30:11.592 )") 00:30:11.592 13:29:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:30:11.592 13:29:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:30:11.592 13:29:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:30:11.592 13:29:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:11.592 13:29:06 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:30:11.592 13:29:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 
00:30:11.592 13:29:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:11.592 13:29:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # local sanitizers 00:30:11.592 13:29:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:11.592 13:29:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # shift 00:30:11.592 13:29:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local asan_lib= 00:30:11.592 13:29:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:11.592 13:29:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:30:11.592 13:29:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:30:11.592 13:29:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:30:11.592 13:29:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:11.592 13:29:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:11.592 13:29:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libasan 00:30:11.592 13:29:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:30:11.592 13:29:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:30:11.592 13:29:06 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:11.592 13:29:06 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:11.592 { 00:30:11.592 "params": { 00:30:11.592 "name": "Nvme$subsystem", 00:30:11.592 "trtype": "$TEST_TRANSPORT", 00:30:11.592 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:11.592 "adrfam": "ipv4", 00:30:11.592 "trsvcid": "$NVMF_PORT", 00:30:11.592 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:11.592 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:11.592 "hdgst": ${hdgst:-false}, 00:30:11.592 "ddgst": ${ddgst:-false} 00:30:11.592 }, 00:30:11.592 "method": "bdev_nvme_attach_controller" 00:30:11.592 } 00:30:11.592 EOF 00:30:11.592 )") 00:30:11.592 13:29:06 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:30:11.592 13:29:06 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:30:11.592 13:29:06 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:30:11.592 13:29:06 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:11.592 "params": { 00:30:11.592 "name": "Nvme0", 00:30:11.592 "trtype": "tcp", 00:30:11.592 "traddr": "10.0.0.2", 00:30:11.592 "adrfam": "ipv4", 00:30:11.592 "trsvcid": "4420", 00:30:11.592 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:11.592 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:11.592 "hdgst": false, 00:30:11.592 "ddgst": false 00:30:11.592 }, 00:30:11.592 "method": "bdev_nvme_attach_controller" 00:30:11.592 },{ 00:30:11.592 "params": { 00:30:11.592 "name": "Nvme1", 00:30:11.592 "trtype": "tcp", 00:30:11.592 "traddr": "10.0.0.2", 00:30:11.592 "adrfam": "ipv4", 00:30:11.592 "trsvcid": "4420", 00:30:11.592 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:11.592 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:11.592 "hdgst": false, 00:30:11.592 "ddgst": false 00:30:11.592 }, 00:30:11.592 "method": "bdev_nvme_attach_controller" 00:30:11.592 }' 00:30:11.592 13:29:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:11.592 13:29:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:11.592 13:29:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:11.592 13:29:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:11.592 13:29:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:11.592 13:29:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:30:11.592 13:29:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:11.592 13:29:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:11.592 13:29:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:30:11.592 13:29:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:11.592 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:11.592 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:11.592 fio-3.35 00:30:11.592 Starting 2 threads 00:30:21.573 00:30:21.573 filename0: (groupid=0, jobs=1): err= 0: pid=116494: Mon Jul 15 13:29:17 2024 00:30:21.573 read: IOPS=224, BW=897KiB/s (918kB/s)(8976KiB/10011msec) 00:30:21.573 slat (nsec): min=6406, max=43940, avg=9253.10, stdev=4555.44 00:30:21.573 clat (usec): min=395, max=42013, avg=17816.45, stdev=20017.89 00:30:21.573 lat (usec): min=401, max=42027, avg=17825.71, stdev=20017.77 00:30:21.573 clat percentiles (usec): 00:30:21.573 | 1.00th=[ 408], 5.00th=[ 424], 10.00th=[ 437], 20.00th=[ 453], 00:30:21.573 | 30.00th=[ 474], 40.00th=[ 494], 50.00th=[ 611], 60.00th=[40633], 00:30:21.573 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:30:21.573 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:30:21.573 | 99.99th=[42206] 00:30:21.573 bw ( KiB/s): min= 512, max= 1216, per=51.24%, avg=896.00, stdev=189.46, samples=20 00:30:21.573 iops : 
min= 128, max= 304, avg=224.00, stdev=47.36, samples=20 00:30:21.573 lat (usec) : 500=42.02%, 750=9.49%, 1000=5.53% 00:30:21.573 lat (msec) : 2=0.18%, 50=42.78% 00:30:21.573 cpu : usr=95.58%, sys=4.03%, ctx=7, majf=0, minf=9 00:30:21.573 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:21.573 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:21.573 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:21.573 issued rwts: total=2244,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:21.573 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:21.573 filename1: (groupid=0, jobs=1): err= 0: pid=116495: Mon Jul 15 13:29:17 2024 00:30:21.573 read: IOPS=213, BW=852KiB/s (873kB/s)(8528KiB/10006msec) 00:30:21.573 slat (nsec): min=6438, max=52270, avg=9682.08, stdev=5984.72 00:30:21.573 clat (usec): min=396, max=42884, avg=18740.51, stdev=20153.65 00:30:21.573 lat (usec): min=403, max=42915, avg=18750.19, stdev=20153.67 00:30:21.573 clat percentiles (usec): 00:30:21.573 | 1.00th=[ 412], 5.00th=[ 429], 10.00th=[ 437], 20.00th=[ 457], 00:30:21.573 | 30.00th=[ 474], 40.00th=[ 502], 50.00th=[ 742], 60.00th=[40633], 00:30:21.573 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:30:21.573 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:30:21.573 | 99.99th=[42730] 00:30:21.573 bw ( KiB/s): min= 576, max= 1184, per=49.36%, avg=864.00, stdev=176.89, samples=19 00:30:21.573 iops : min= 144, max= 296, avg=216.00, stdev=44.22, samples=19 00:30:21.573 lat (usec) : 500=39.96%, 750=10.27%, 1000=4.55% 00:30:21.573 lat (msec) : 2=0.19%, 50=45.03% 00:30:21.573 cpu : usr=94.67%, sys=4.59%, ctx=103, majf=0, minf=0 00:30:21.573 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:21.573 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:21.573 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:21.573 issued rwts: total=2132,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:21.573 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:21.573 00:30:21.573 Run status group 0 (all jobs): 00:30:21.573 READ: bw=1748KiB/s (1790kB/s), 852KiB/s-897KiB/s (873kB/s-918kB/s), io=17.1MiB (17.9MB), run=10006-10011msec 00:30:21.573 13:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:30:21.573 13:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:30:21.573 13:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:30:21.573 13:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:21.573 13:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:30:21.573 13:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:21.573 13:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:21.573 13:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:21.573 13:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:21.573 13:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:21.573 13:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:21.573 13:29:17 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:21.573 13:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:21.573 13:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:30:21.573 13:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:21.573 13:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:30:21.573 13:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:21.573 13:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:21.573 13:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:21.573 13:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:21.573 13:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:21.573 13:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:21.573 13:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:21.573 ************************************ 00:30:21.573 END TEST fio_dif_1_multi_subsystems 00:30:21.573 ************************************ 00:30:21.573 13:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:21.573 00:30:21.573 real 0m11.127s 00:30:21.573 user 0m19.807s 00:30:21.573 sys 0m1.120s 00:30:21.573 13:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:21.573 13:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:21.573 13:29:17 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:30:21.573 13:29:17 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:30:21.573 13:29:17 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:21.573 13:29:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:21.573 ************************************ 00:30:21.573 START TEST fio_dif_rand_params 00:30:21.573 ************************************ 00:30:21.573 13:29:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1121 -- # fio_dif_rand_params 00:30:21.573 13:29:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:30:21.573 13:29:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:30:21.573 13:29:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:30:21.573 13:29:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:30:21.573 13:29:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:30:21.573 13:29:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:30:21.573 13:29:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:30:21.573 13:29:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:30:21.573 13:29:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:30:21.573 13:29:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:21.573 13:29:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:30:21.573 13:29:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local 
sub_id=0 00:30:21.574 13:29:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:30:21.574 13:29:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:21.574 13:29:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:21.574 bdev_null0 00:30:21.574 13:29:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:21.574 13:29:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:21.574 13:29:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:21.574 13:29:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:21.574 13:29:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:21.574 13:29:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:21.574 13:29:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:21.574 13:29:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:21.574 13:29:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:21.574 13:29:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:21.574 13:29:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:21.574 13:29:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:21.574 [2024-07-15 13:29:17.917563] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:21.574 13:29:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:21.574 13:29:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:30:21.574 13:29:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:30:21.574 13:29:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:21.574 13:29:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:30:21.574 13:29:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:30:21.574 13:29:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:21.574 13:29:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:21.574 13:29:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:21.574 13:29:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:21.574 { 00:30:21.574 "params": { 00:30:21.574 "name": "Nvme$subsystem", 00:30:21.574 "trtype": "$TEST_TRANSPORT", 00:30:21.574 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:21.574 "adrfam": "ipv4", 00:30:21.574 "trsvcid": "$NVMF_PORT", 00:30:21.574 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:21.574 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:21.574 "hdgst": ${hdgst:-false}, 00:30:21.574 "ddgst": ${ddgst:-false} 00:30:21.574 }, 00:30:21.574 "method": "bdev_nvme_attach_controller" 
00:30:21.574 } 00:30:21.574 EOF 00:30:21.574 )") 00:30:21.574 13:29:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:30:21.574 13:29:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:30:21.574 13:29:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:21.574 13:29:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:30:21.574 13:29:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:30:21.574 13:29:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:21.574 13:29:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:30:21.574 13:29:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:30:21.574 13:29:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:30:21.574 13:29:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:21.574 13:29:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:21.574 13:29:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:21.574 13:29:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:30:21.574 13:29:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:30:21.574 13:29:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:21.574 13:29:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:21.574 13:29:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:30:21.574 13:29:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:30:21.574 13:29:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:21.574 "params": { 00:30:21.574 "name": "Nvme0", 00:30:21.574 "trtype": "tcp", 00:30:21.574 "traddr": "10.0.0.2", 00:30:21.574 "adrfam": "ipv4", 00:30:21.574 "trsvcid": "4420", 00:30:21.574 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:21.574 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:21.574 "hdgst": false, 00:30:21.574 "ddgst": false 00:30:21.574 }, 00:30:21.574 "method": "bdev_nvme_attach_controller" 00:30:21.574 }' 00:30:21.574 13:29:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:21.574 13:29:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:21.574 13:29:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:21.574 13:29:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:30:21.574 13:29:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:21.574 13:29:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:21.574 13:29:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:21.574 13:29:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:21.574 13:29:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:30:21.574 13:29:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:21.574 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:30:21.574 ... 
00:30:21.574 fio-3.35 00:30:21.574 Starting 3 threads 00:30:28.140 00:30:28.140 filename0: (groupid=0, jobs=1): err= 0: pid=116646: Mon Jul 15 13:29:23 2024 00:30:28.140 read: IOPS=253, BW=31.7MiB/s (33.3MB/s)(159MiB/5003msec) 00:30:28.140 slat (nsec): min=6705, max=53775, avg=10990.76, stdev=4673.43 00:30:28.140 clat (usec): min=6439, max=52357, avg=11795.88, stdev=2998.03 00:30:28.140 lat (usec): min=6447, max=52365, avg=11806.87, stdev=2998.03 00:30:28.140 clat percentiles (usec): 00:30:28.140 | 1.00th=[ 7111], 5.00th=[ 9896], 10.00th=[10421], 20.00th=[11076], 00:30:28.140 | 30.00th=[11338], 40.00th=[11600], 50.00th=[11731], 60.00th=[11994], 00:30:28.140 | 70.00th=[12125], 80.00th=[12387], 90.00th=[12649], 95.00th=[13042], 00:30:28.140 | 99.00th=[13829], 99.50th=[14746], 99.90th=[52167], 99.95th=[52167], 00:30:28.140 | 99.99th=[52167] 00:30:28.140 bw ( KiB/s): min=29952, max=35328, per=34.31%, avg=32540.44, stdev=1580.40, samples=9 00:30:28.140 iops : min= 234, max= 276, avg=254.22, stdev=12.35, samples=9 00:30:28.140 lat (msec) : 10=5.35%, 20=94.17%, 100=0.47% 00:30:28.140 cpu : usr=91.86%, sys=6.74%, ctx=12, majf=0, minf=0 00:30:28.140 IO depths : 1=9.8%, 2=90.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:28.140 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.140 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.140 issued rwts: total=1270,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:28.140 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:28.140 filename0: (groupid=0, jobs=1): err= 0: pid=116647: Mon Jul 15 13:29:23 2024 00:30:28.140 read: IOPS=279, BW=35.0MiB/s (36.7MB/s)(175MiB/5006msec) 00:30:28.140 slat (nsec): min=6863, max=41694, avg=11973.18, stdev=4179.46 00:30:28.140 clat (usec): min=5537, max=52881, avg=10701.75, stdev=3393.45 00:30:28.140 lat (usec): min=5572, max=52896, avg=10713.72, stdev=3393.54 00:30:28.140 clat percentiles (usec): 00:30:28.140 | 1.00th=[ 7177], 5.00th=[ 8979], 10.00th=[ 9372], 20.00th=[ 9765], 00:30:28.140 | 30.00th=[10028], 40.00th=[10290], 50.00th=[10552], 60.00th=[10814], 00:30:28.140 | 70.00th=[10945], 80.00th=[11207], 90.00th=[11469], 95.00th=[11863], 00:30:28.140 | 99.00th=[12518], 99.50th=[50594], 99.90th=[52691], 99.95th=[52691], 00:30:28.140 | 99.99th=[52691] 00:30:28.140 bw ( KiB/s): min=33024, max=38400, per=37.74%, avg=35788.80, stdev=1710.08, samples=10 00:30:28.140 iops : min= 258, max= 300, avg=279.60, stdev=13.36, samples=10 00:30:28.140 lat (msec) : 10=27.19%, 20=72.16%, 50=0.07%, 100=0.57% 00:30:28.140 cpu : usr=91.67%, sys=6.83%, ctx=8, majf=0, minf=0 00:30:28.140 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:28.140 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.140 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.140 issued rwts: total=1401,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:28.140 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:28.140 filename0: (groupid=0, jobs=1): err= 0: pid=116648: Mon Jul 15 13:29:23 2024 00:30:28.140 read: IOPS=207, BW=25.9MiB/s (27.2MB/s)(130MiB/5001msec) 00:30:28.140 slat (nsec): min=6726, max=55177, avg=10441.25, stdev=4801.69 00:30:28.140 clat (usec): min=8083, max=17632, avg=14426.57, stdev=1597.40 00:30:28.140 lat (usec): min=8090, max=17640, avg=14437.01, stdev=1597.61 00:30:28.140 clat percentiles (usec): 00:30:28.140 | 1.00th=[ 8455], 5.00th=[ 9896], 10.00th=[13304], 20.00th=[13829], 
00:30:28.140 | 30.00th=[14091], 40.00th=[14353], 50.00th=[14746], 60.00th=[15008], 00:30:28.140 | 70.00th=[15270], 80.00th=[15533], 90.00th=[15795], 95.00th=[16188], 00:30:28.140 | 99.00th=[16909], 99.50th=[17171], 99.90th=[17433], 99.95th=[17695], 00:30:28.140 | 99.99th=[17695] 00:30:28.140 bw ( KiB/s): min=25344, max=27648, per=28.16%, avg=26709.33, stdev=746.36, samples=9 00:30:28.141 iops : min= 198, max= 216, avg=208.67, stdev= 5.83, samples=9 00:30:28.141 lat (msec) : 10=5.59%, 20=94.41% 00:30:28.141 cpu : usr=93.02%, sys=5.58%, ctx=11, majf=0, minf=0 00:30:28.141 IO depths : 1=33.1%, 2=66.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:28.141 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.141 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.141 issued rwts: total=1038,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:28.141 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:28.141 00:30:28.141 Run status group 0 (all jobs): 00:30:28.141 READ: bw=92.6MiB/s (97.1MB/s), 25.9MiB/s-35.0MiB/s (27.2MB/s-36.7MB/s), io=464MiB (486MB), run=5001-5006msec 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 
512 --md-size 16 --dif-type 2 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:28.141 bdev_null0 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:28.141 [2024-07-15 13:29:23.892308] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:28.141 bdev_null1 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:28.141 bdev_null2 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:30:28.141 13:29:23 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:28.141 { 00:30:28.141 "params": { 00:30:28.141 "name": "Nvme$subsystem", 00:30:28.141 "trtype": "$TEST_TRANSPORT", 00:30:28.141 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:28.141 "adrfam": "ipv4", 00:30:28.141 "trsvcid": "$NVMF_PORT", 00:30:28.141 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:28.141 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:28.141 "hdgst": ${hdgst:-false}, 00:30:28.141 "ddgst": ${ddgst:-false} 00:30:28.141 }, 00:30:28.141 "method": "bdev_nvme_attach_controller" 00:30:28.141 } 00:30:28.141 EOF 00:30:28.141 )") 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:28.141 { 00:30:28.141 "params": { 00:30:28.141 "name": "Nvme$subsystem", 00:30:28.141 "trtype": "$TEST_TRANSPORT", 00:30:28.141 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:28.141 "adrfam": "ipv4", 00:30:28.141 "trsvcid": "$NVMF_PORT", 00:30:28.141 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:28.141 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:28.141 "hdgst": ${hdgst:-false}, 00:30:28.141 "ddgst": ${ddgst:-false} 00:30:28.141 }, 00:30:28.141 "method": "bdev_nvme_attach_controller" 00:30:28.141 } 00:30:28.141 EOF 00:30:28.141 )") 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@73 -- # cat 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:28.141 { 00:30:28.141 "params": { 00:30:28.141 "name": "Nvme$subsystem", 00:30:28.141 "trtype": "$TEST_TRANSPORT", 00:30:28.141 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:28.141 "adrfam": "ipv4", 00:30:28.141 "trsvcid": "$NVMF_PORT", 00:30:28.141 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:28.141 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:28.141 "hdgst": ${hdgst:-false}, 00:30:28.141 "ddgst": ${ddgst:-false} 00:30:28.141 }, 00:30:28.141 "method": "bdev_nvme_attach_controller" 00:30:28.141 } 00:30:28.141 EOF 00:30:28.141 )") 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:28.141 "params": { 00:30:28.141 "name": "Nvme0", 00:30:28.141 "trtype": "tcp", 00:30:28.141 "traddr": "10.0.0.2", 00:30:28.141 "adrfam": "ipv4", 00:30:28.141 "trsvcid": "4420", 00:30:28.141 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:28.141 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:28.141 "hdgst": false, 00:30:28.141 "ddgst": false 00:30:28.141 }, 00:30:28.141 "method": "bdev_nvme_attach_controller" 00:30:28.141 },{ 00:30:28.141 "params": { 00:30:28.141 "name": "Nvme1", 00:30:28.141 "trtype": "tcp", 00:30:28.141 "traddr": "10.0.0.2", 00:30:28.141 "adrfam": "ipv4", 00:30:28.141 "trsvcid": "4420", 00:30:28.141 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:28.141 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:28.141 "hdgst": false, 00:30:28.141 "ddgst": false 00:30:28.141 }, 00:30:28.141 "method": "bdev_nvme_attach_controller" 00:30:28.141 },{ 00:30:28.141 "params": { 00:30:28.141 "name": "Nvme2", 00:30:28.141 "trtype": "tcp", 00:30:28.141 "traddr": "10.0.0.2", 00:30:28.141 "adrfam": "ipv4", 00:30:28.141 "trsvcid": "4420", 00:30:28.141 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:28.141 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:28.141 "hdgst": false, 00:30:28.141 "ddgst": false 00:30:28.141 }, 00:30:28.141 "method": "bdev_nvme_attach_controller" 00:30:28.141 }' 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:28.141 13:29:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:28.141 13:29:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:28.141 13:29:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:30:28.141 13:29:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:28.141 13:29:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:28.141 13:29:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:28.141 13:29:24 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:30:28.141 13:29:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:28.141 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:28.141 ... 00:30:28.141 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:28.141 ... 00:30:28.141 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:28.141 ... 00:30:28.141 fio-3.35 00:30:28.141 Starting 24 threads 00:30:40.341 00:30:40.341 filename0: (groupid=0, jobs=1): err= 0: pid=116743: Mon Jul 15 13:29:35 2024 00:30:40.341 read: IOPS=215, BW=861KiB/s (881kB/s)(8660KiB/10063msec) 00:30:40.341 slat (usec): min=3, max=4646, avg=17.69, stdev=163.87 00:30:40.341 clat (msec): min=22, max=163, avg=74.20, stdev=27.23 00:30:40.341 lat (msec): min=22, max=163, avg=74.22, stdev=27.22 00:30:40.341 clat percentiles (msec): 00:30:40.341 | 1.00th=[ 26], 5.00th=[ 39], 10.00th=[ 43], 20.00th=[ 50], 00:30:40.341 | 30.00th=[ 56], 40.00th=[ 65], 50.00th=[ 72], 60.00th=[ 75], 00:30:40.341 | 70.00th=[ 86], 80.00th=[ 97], 90.00th=[ 109], 95.00th=[ 127], 00:30:40.341 | 99.00th=[ 157], 99.50th=[ 165], 99.90th=[ 165], 99.95th=[ 165], 00:30:40.341 | 99.99th=[ 165] 00:30:40.341 bw ( KiB/s): min= 512, max= 1200, per=4.36%, avg=859.15, stdev=200.69, samples=20 00:30:40.341 iops : min= 128, max= 300, avg=214.70, stdev=50.11, samples=20 00:30:40.341 lat (msec) : 50=20.51%, 100=61.52%, 250=17.97% 00:30:40.341 cpu : usr=43.33%, sys=0.97%, ctx=1322, majf=0, minf=9 00:30:40.341 IO depths : 1=1.5%, 2=3.0%, 4=10.7%, 8=72.4%, 16=12.3%, 32=0.0%, >=64=0.0% 00:30:40.341 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.341 complete : 0=0.0%, 4=90.5%, 8=5.1%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.341 issued rwts: total=2165,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:40.341 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:40.341 filename0: (groupid=0, jobs=1): err= 0: pid=116744: Mon Jul 15 13:29:35 2024 00:30:40.341 read: IOPS=180, BW=722KiB/s (739kB/s)(7244KiB/10035msec) 00:30:40.341 slat (usec): min=4, max=8027, avg=25.14, stdev=325.85 00:30:40.341 clat (msec): min=23, max=191, avg=88.48, stdev=31.95 00:30:40.341 lat (msec): min=23, max=191, avg=88.51, stdev=31.93 00:30:40.341 clat percentiles (msec): 00:30:40.341 | 1.00th=[ 24], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 64], 00:30:40.341 | 30.00th=[ 72], 40.00th=[ 74], 50.00th=[ 85], 60.00th=[ 94], 00:30:40.341 | 70.00th=[ 99], 80.00th=[ 110], 90.00th=[ 132], 95.00th=[ 157], 00:30:40.341 | 99.00th=[ 178], 99.50th=[ 188], 99.90th=[ 192], 99.95th=[ 192], 00:30:40.341 | 99.99th=[ 192] 00:30:40.341 bw ( KiB/s): min= 432, max= 1104, per=3.64%, avg=717.35, stdev=181.32, samples=20 00:30:40.341 iops : min= 108, max= 276, avg=179.25, stdev=45.31, samples=20 00:30:40.341 lat (msec) : 50=10.66%, 100=61.02%, 250=28.33% 00:30:40.341 cpu : usr=32.36%, sys=0.75%, ctx=880, majf=0, minf=9 00:30:40.341 IO depths : 1=2.3%, 2=4.9%, 4=13.7%, 8=68.2%, 16=10.9%, 32=0.0%, >=64=0.0% 00:30:40.341 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.341 complete : 0=0.0%, 4=91.1%, 8=3.9%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.341 issued rwts: 
total=1811,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:40.341 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:40.341 filename0: (groupid=0, jobs=1): err= 0: pid=116745: Mon Jul 15 13:29:35 2024 00:30:40.341 read: IOPS=177, BW=712KiB/s (729kB/s)(7140KiB/10033msec) 00:30:40.341 slat (usec): min=3, max=8038, avg=27.17, stdev=341.96 00:30:40.341 clat (msec): min=22, max=160, avg=89.78, stdev=27.60 00:30:40.341 lat (msec): min=22, max=160, avg=89.81, stdev=27.61 00:30:40.341 clat percentiles (msec): 00:30:40.341 | 1.00th=[ 24], 5.00th=[ 45], 10.00th=[ 60], 20.00th=[ 71], 00:30:40.341 | 30.00th=[ 72], 40.00th=[ 84], 50.00th=[ 88], 60.00th=[ 97], 00:30:40.341 | 70.00th=[ 106], 80.00th=[ 109], 90.00th=[ 121], 95.00th=[ 140], 00:30:40.341 | 99.00th=[ 155], 99.50th=[ 157], 99.90th=[ 161], 99.95th=[ 161], 00:30:40.341 | 99.99th=[ 161] 00:30:40.341 bw ( KiB/s): min= 512, max= 1176, per=3.59%, avg=707.10, stdev=146.74, samples=20 00:30:40.341 iops : min= 128, max= 294, avg=176.70, stdev=36.68, samples=20 00:30:40.341 lat (msec) : 50=7.79%, 100=56.36%, 250=35.85% 00:30:40.341 cpu : usr=34.48%, sys=0.73%, ctx=985, majf=0, minf=9 00:30:40.342 IO depths : 1=2.5%, 2=5.4%, 4=15.1%, 8=66.3%, 16=10.6%, 32=0.0%, >=64=0.0% 00:30:40.342 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.342 complete : 0=0.0%, 4=91.1%, 8=3.8%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.342 issued rwts: total=1785,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:40.342 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:40.342 filename0: (groupid=0, jobs=1): err= 0: pid=116746: Mon Jul 15 13:29:35 2024 00:30:40.342 read: IOPS=183, BW=733KiB/s (750kB/s)(7328KiB/10003msec) 00:30:40.342 slat (usec): min=3, max=8036, avg=16.11, stdev=187.60 00:30:40.342 clat (msec): min=20, max=178, avg=87.27, stdev=26.95 00:30:40.342 lat (msec): min=20, max=178, avg=87.28, stdev=26.94 00:30:40.342 clat percentiles (msec): 00:30:40.342 | 1.00th=[ 24], 5.00th=[ 46], 10.00th=[ 60], 20.00th=[ 69], 00:30:40.342 | 30.00th=[ 72], 40.00th=[ 77], 50.00th=[ 84], 60.00th=[ 95], 00:30:40.342 | 70.00th=[ 103], 80.00th=[ 110], 90.00th=[ 125], 95.00th=[ 134], 00:30:40.342 | 99.00th=[ 146], 99.50th=[ 161], 99.90th=[ 171], 99.95th=[ 180], 00:30:40.342 | 99.99th=[ 180] 00:30:40.342 bw ( KiB/s): min= 464, max= 1080, per=3.70%, avg=730.16, stdev=158.51, samples=19 00:30:40.342 iops : min= 116, max= 270, avg=182.42, stdev=39.63, samples=19 00:30:40.342 lat (msec) : 50=8.79%, 100=60.48%, 250=30.73% 00:30:40.342 cpu : usr=35.88%, sys=0.85%, ctx=1091, majf=0, minf=9 00:30:40.342 IO depths : 1=2.2%, 2=5.2%, 4=15.4%, 8=66.3%, 16=10.9%, 32=0.0%, >=64=0.0% 00:30:40.342 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.342 complete : 0=0.0%, 4=91.4%, 8=3.5%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.342 issued rwts: total=1832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:40.342 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:40.342 filename0: (groupid=0, jobs=1): err= 0: pid=116747: Mon Jul 15 13:29:35 2024 00:30:40.342 read: IOPS=214, BW=858KiB/s (878kB/s)(8628KiB/10058msec) 00:30:40.342 slat (usec): min=5, max=7019, avg=15.97, stdev=151.04 00:30:40.342 clat (msec): min=20, max=183, avg=74.44, stdev=24.40 00:30:40.342 lat (msec): min=20, max=183, avg=74.45, stdev=24.40 00:30:40.342 clat percentiles (msec): 00:30:40.342 | 1.00th=[ 24], 5.00th=[ 37], 10.00th=[ 44], 20.00th=[ 54], 00:30:40.342 | 30.00th=[ 63], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 78], 00:30:40.342 | 70.00th=[ 
85], 80.00th=[ 97], 90.00th=[ 108], 95.00th=[ 117], 00:30:40.342 | 99.00th=[ 130], 99.50th=[ 133], 99.90th=[ 184], 99.95th=[ 184], 00:30:40.342 | 99.99th=[ 184] 00:30:40.342 bw ( KiB/s): min= 640, max= 1154, per=4.34%, avg=856.30, stdev=158.74, samples=20 00:30:40.342 iops : min= 160, max= 288, avg=214.00, stdev=39.65, samples=20 00:30:40.342 lat (msec) : 50=17.20%, 100=66.43%, 250=16.37% 00:30:40.342 cpu : usr=35.95%, sys=0.91%, ctx=1022, majf=0, minf=9 00:30:40.342 IO depths : 1=1.7%, 2=3.5%, 4=11.4%, 8=71.9%, 16=11.6%, 32=0.0%, >=64=0.0% 00:30:40.342 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.342 complete : 0=0.0%, 4=90.2%, 8=4.9%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.342 issued rwts: total=2157,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:40.342 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:40.342 filename0: (groupid=0, jobs=1): err= 0: pid=116748: Mon Jul 15 13:29:35 2024 00:30:40.342 read: IOPS=198, BW=792KiB/s (811kB/s)(7960KiB/10047msec) 00:30:40.342 slat (usec): min=4, max=8024, avg=19.40, stdev=253.93 00:30:40.342 clat (msec): min=23, max=182, avg=80.66, stdev=27.55 00:30:40.342 lat (msec): min=23, max=182, avg=80.68, stdev=27.55 00:30:40.342 clat percentiles (msec): 00:30:40.342 | 1.00th=[ 26], 5.00th=[ 40], 10.00th=[ 48], 20.00th=[ 59], 00:30:40.342 | 30.00th=[ 65], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 85], 00:30:40.342 | 70.00th=[ 95], 80.00th=[ 105], 90.00th=[ 120], 95.00th=[ 131], 00:30:40.342 | 99.00th=[ 169], 99.50th=[ 180], 99.90th=[ 182], 99.95th=[ 182], 00:30:40.342 | 99.99th=[ 182] 00:30:40.342 bw ( KiB/s): min= 512, max= 1200, per=4.00%, avg=788.15, stdev=192.22, samples=20 00:30:40.342 iops : min= 128, max= 300, avg=197.00, stdev=48.09, samples=20 00:30:40.342 lat (msec) : 50=14.62%, 100=63.27%, 250=22.11% 00:30:40.342 cpu : usr=35.70%, sys=1.00%, ctx=973, majf=0, minf=9 00:30:40.342 IO depths : 1=1.5%, 2=3.3%, 4=10.7%, 8=72.2%, 16=12.4%, 32=0.0%, >=64=0.0% 00:30:40.342 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.342 complete : 0=0.0%, 4=90.5%, 8=5.2%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.342 issued rwts: total=1990,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:40.342 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:40.342 filename0: (groupid=0, jobs=1): err= 0: pid=116749: Mon Jul 15 13:29:35 2024 00:30:40.342 read: IOPS=234, BW=938KiB/s (961kB/s)(9436KiB/10055msec) 00:30:40.342 slat (usec): min=4, max=9020, avg=18.45, stdev=221.18 00:30:40.342 clat (msec): min=17, max=152, avg=68.05, stdev=22.65 00:30:40.342 lat (msec): min=17, max=152, avg=68.07, stdev=22.66 00:30:40.342 clat percentiles (msec): 00:30:40.342 | 1.00th=[ 28], 5.00th=[ 39], 10.00th=[ 44], 20.00th=[ 48], 00:30:40.342 | 30.00th=[ 54], 40.00th=[ 60], 50.00th=[ 67], 60.00th=[ 72], 00:30:40.342 | 70.00th=[ 79], 80.00th=[ 85], 90.00th=[ 101], 95.00th=[ 112], 00:30:40.342 | 99.00th=[ 123], 99.50th=[ 136], 99.90th=[ 153], 99.95th=[ 153], 00:30:40.342 | 99.99th=[ 153] 00:30:40.342 bw ( KiB/s): min= 560, max= 1200, per=4.75%, avg=937.00, stdev=195.69, samples=20 00:30:40.342 iops : min= 140, max= 300, avg=234.20, stdev=48.93, samples=20 00:30:40.342 lat (msec) : 20=0.17%, 50=26.07%, 100=63.97%, 250=9.79% 00:30:40.342 cpu : usr=46.43%, sys=1.16%, ctx=1632, majf=0, minf=9 00:30:40.342 IO depths : 1=1.4%, 2=3.1%, 4=10.9%, 8=73.0%, 16=11.7%, 32=0.0%, >=64=0.0% 00:30:40.342 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.342 complete : 0=0.0%, 4=90.2%, 8=4.8%, 
16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.342 issued rwts: total=2359,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:40.342 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:40.342 filename0: (groupid=0, jobs=1): err= 0: pid=116750: Mon Jul 15 13:29:35 2024 00:30:40.342 read: IOPS=193, BW=775KiB/s (794kB/s)(7788KiB/10049msec) 00:30:40.342 slat (usec): min=4, max=8025, avg=21.33, stdev=256.32 00:30:40.342 clat (msec): min=22, max=162, avg=82.37, stdev=26.26 00:30:40.342 lat (msec): min=22, max=162, avg=82.39, stdev=26.27 00:30:40.342 clat percentiles (msec): 00:30:40.342 | 1.00th=[ 25], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 62], 00:30:40.342 | 30.00th=[ 68], 40.00th=[ 72], 50.00th=[ 82], 60.00th=[ 89], 00:30:40.342 | 70.00th=[ 94], 80.00th=[ 105], 90.00th=[ 117], 95.00th=[ 128], 00:30:40.342 | 99.00th=[ 146], 99.50th=[ 155], 99.90th=[ 163], 99.95th=[ 163], 00:30:40.342 | 99.99th=[ 163] 00:30:40.342 bw ( KiB/s): min= 512, max= 1128, per=3.92%, avg=772.30, stdev=161.77, samples=20 00:30:40.342 iops : min= 128, max= 282, avg=193.05, stdev=40.44, samples=20 00:30:40.342 lat (msec) : 50=12.43%, 100=63.59%, 250=23.99% 00:30:40.342 cpu : usr=38.28%, sys=0.95%, ctx=1049, majf=0, minf=9 00:30:40.342 IO depths : 1=1.5%, 2=3.3%, 4=10.6%, 8=72.6%, 16=12.0%, 32=0.0%, >=64=0.0% 00:30:40.342 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.342 complete : 0=0.0%, 4=90.2%, 8=5.3%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.342 issued rwts: total=1947,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:40.342 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:40.342 filename1: (groupid=0, jobs=1): err= 0: pid=116751: Mon Jul 15 13:29:35 2024 00:30:40.342 read: IOPS=181, BW=725KiB/s (742kB/s)(7284KiB/10050msec) 00:30:40.342 slat (usec): min=3, max=8017, avg=21.90, stdev=240.04 00:30:40.342 clat (msec): min=38, max=183, avg=87.98, stdev=24.45 00:30:40.342 lat (msec): min=38, max=183, avg=88.00, stdev=24.45 00:30:40.342 clat percentiles (msec): 00:30:40.342 | 1.00th=[ 45], 5.00th=[ 49], 10.00th=[ 58], 20.00th=[ 68], 00:30:40.342 | 30.00th=[ 72], 40.00th=[ 79], 50.00th=[ 86], 60.00th=[ 97], 00:30:40.342 | 70.00th=[ 103], 80.00th=[ 108], 90.00th=[ 120], 95.00th=[ 136], 00:30:40.342 | 99.00th=[ 144], 99.50th=[ 150], 99.90th=[ 184], 99.95th=[ 184], 00:30:40.342 | 99.99th=[ 184] 00:30:40.342 bw ( KiB/s): min= 512, max= 1032, per=3.66%, avg=721.65, stdev=140.10, samples=20 00:30:40.342 iops : min= 128, max= 258, avg=180.40, stdev=35.02, samples=20 00:30:40.342 lat (msec) : 50=5.77%, 100=58.98%, 250=35.26% 00:30:40.342 cpu : usr=38.55%, sys=1.08%, ctx=1258, majf=0, minf=9 00:30:40.342 IO depths : 1=2.7%, 2=6.0%, 4=16.2%, 8=65.0%, 16=10.0%, 32=0.0%, >=64=0.0% 00:30:40.342 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.342 complete : 0=0.0%, 4=91.4%, 8=3.2%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.342 issued rwts: total=1821,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:40.342 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:40.342 filename1: (groupid=0, jobs=1): err= 0: pid=116752: Mon Jul 15 13:29:35 2024 00:30:40.342 read: IOPS=250, BW=1002KiB/s (1026kB/s)(9.86MiB/10076msec) 00:30:40.342 slat (usec): min=7, max=7029, avg=16.05, stdev=179.61 00:30:40.342 clat (msec): min=2, max=130, avg=63.73, stdev=21.26 00:30:40.342 lat (msec): min=2, max=130, avg=63.75, stdev=21.26 00:30:40.342 clat percentiles (msec): 00:30:40.342 | 1.00th=[ 4], 5.00th=[ 33], 10.00th=[ 43], 20.00th=[ 48], 00:30:40.342 | 30.00th=[ 50], 
40.00th=[ 59], 50.00th=[ 64], 60.00th=[ 70], 00:30:40.342 | 70.00th=[ 75], 80.00th=[ 82], 90.00th=[ 89], 95.00th=[ 99], 00:30:40.342 | 99.00th=[ 117], 99.50th=[ 127], 99.90th=[ 131], 99.95th=[ 131], 00:30:40.342 | 99.99th=[ 131] 00:30:40.342 bw ( KiB/s): min= 736, max= 1792, per=5.08%, avg=1002.65, stdev=223.31, samples=20 00:30:40.342 iops : min= 184, max= 448, avg=250.60, stdev=55.84, samples=20 00:30:40.342 lat (msec) : 4=1.27%, 10=1.27%, 20=0.63%, 50=27.98%, 100=64.45% 00:30:40.342 lat (msec) : 250=4.40% 00:30:40.342 cpu : usr=39.46%, sys=1.00%, ctx=1238, majf=0, minf=9 00:30:40.342 IO depths : 1=0.6%, 2=1.5%, 4=7.5%, 8=77.3%, 16=13.1%, 32=0.0%, >=64=0.0% 00:30:40.342 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.342 complete : 0=0.0%, 4=89.5%, 8=6.1%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.342 issued rwts: total=2523,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:40.342 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:40.342 filename1: (groupid=0, jobs=1): err= 0: pid=116753: Mon Jul 15 13:29:35 2024 00:30:40.342 read: IOPS=186, BW=747KiB/s (765kB/s)(7496KiB/10036msec) 00:30:40.342 slat (usec): min=5, max=4032, avg=15.97, stdev=131.20 00:30:40.342 clat (msec): min=21, max=170, avg=85.56, stdev=26.28 00:30:40.342 lat (msec): min=21, max=170, avg=85.57, stdev=26.28 00:30:40.342 clat percentiles (msec): 00:30:40.342 | 1.00th=[ 32], 5.00th=[ 45], 10.00th=[ 57], 20.00th=[ 68], 00:30:40.342 | 30.00th=[ 72], 40.00th=[ 74], 50.00th=[ 83], 60.00th=[ 88], 00:30:40.342 | 70.00th=[ 97], 80.00th=[ 108], 90.00th=[ 120], 95.00th=[ 138], 00:30:40.342 | 99.00th=[ 163], 99.50th=[ 167], 99.90th=[ 171], 99.95th=[ 171], 00:30:40.342 | 99.99th=[ 171] 00:30:40.342 bw ( KiB/s): min= 512, max= 1024, per=3.76%, avg=742.50, stdev=138.10, samples=20 00:30:40.342 iops : min= 128, max= 256, avg=185.55, stdev=34.50, samples=20 00:30:40.342 lat (msec) : 50=6.78%, 100=66.38%, 250=26.84% 00:30:40.342 cpu : usr=41.63%, sys=0.89%, ctx=1160, majf=0, minf=9 00:30:40.342 IO depths : 1=3.3%, 2=6.9%, 4=16.5%, 8=63.8%, 16=9.5%, 32=0.0%, >=64=0.0% 00:30:40.343 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.343 complete : 0=0.0%, 4=91.9%, 8=2.7%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.343 issued rwts: total=1874,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:40.343 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:40.343 filename1: (groupid=0, jobs=1): err= 0: pid=116754: Mon Jul 15 13:29:35 2024 00:30:40.343 read: IOPS=199, BW=800KiB/s (819kB/s)(8036KiB/10046msec) 00:30:40.343 slat (usec): min=4, max=8018, avg=18.85, stdev=218.92 00:30:40.343 clat (msec): min=14, max=155, avg=79.77, stdev=25.55 00:30:40.343 lat (msec): min=14, max=155, avg=79.79, stdev=25.56 00:30:40.343 clat percentiles (msec): 00:30:40.343 | 1.00th=[ 24], 5.00th=[ 41], 10.00th=[ 48], 20.00th=[ 61], 00:30:40.343 | 30.00th=[ 67], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 84], 00:30:40.343 | 70.00th=[ 91], 80.00th=[ 103], 90.00th=[ 116], 95.00th=[ 121], 00:30:40.343 | 99.00th=[ 144], 99.50th=[ 155], 99.90th=[ 157], 99.95th=[ 157], 00:30:40.343 | 99.99th=[ 157] 00:30:40.343 bw ( KiB/s): min= 512, max= 1176, per=4.05%, avg=799.90, stdev=154.29, samples=20 00:30:40.343 iops : min= 128, max= 294, avg=199.95, stdev=38.59, samples=20 00:30:40.343 lat (msec) : 20=0.30%, 50=12.44%, 100=65.41%, 250=21.85% 00:30:40.343 cpu : usr=40.53%, sys=0.94%, ctx=1121, majf=0, minf=9 00:30:40.343 IO depths : 1=2.3%, 2=5.1%, 4=15.6%, 8=66.4%, 16=10.7%, 32=0.0%, >=64=0.0% 00:30:40.343 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.343 complete : 0=0.0%, 4=91.2%, 8=3.6%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.343 issued rwts: total=2009,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:40.343 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:40.343 filename1: (groupid=0, jobs=1): err= 0: pid=116755: Mon Jul 15 13:29:35 2024 00:30:40.343 read: IOPS=203, BW=816KiB/s (835kB/s)(8196KiB/10048msec) 00:30:40.343 slat (usec): min=3, max=4019, avg=12.95, stdev=88.71 00:30:40.343 clat (msec): min=23, max=196, avg=78.21, stdev=30.23 00:30:40.343 lat (msec): min=23, max=196, avg=78.23, stdev=30.23 00:30:40.343 clat percentiles (msec): 00:30:40.343 | 1.00th=[ 27], 5.00th=[ 36], 10.00th=[ 44], 20.00th=[ 53], 00:30:40.343 | 30.00th=[ 62], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 81], 00:30:40.343 | 70.00th=[ 90], 80.00th=[ 105], 90.00th=[ 124], 95.00th=[ 136], 00:30:40.343 | 99.00th=[ 161], 99.50th=[ 167], 99.90th=[ 197], 99.95th=[ 197], 00:30:40.343 | 99.99th=[ 197] 00:30:40.343 bw ( KiB/s): min= 464, max= 1168, per=4.13%, avg=813.10, stdev=216.97, samples=20 00:30:40.343 iops : min= 116, max= 292, avg=203.25, stdev=54.22, samples=20 00:30:40.343 lat (msec) : 50=18.98%, 100=57.93%, 250=23.08% 00:30:40.343 cpu : usr=38.10%, sys=0.81%, ctx=1224, majf=0, minf=0 00:30:40.343 IO depths : 1=0.9%, 2=2.0%, 4=8.4%, 8=75.4%, 16=13.3%, 32=0.0%, >=64=0.0% 00:30:40.343 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.343 complete : 0=0.0%, 4=89.9%, 8=6.1%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.343 issued rwts: total=2049,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:40.343 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:40.343 filename1: (groupid=0, jobs=1): err= 0: pid=116756: Mon Jul 15 13:29:35 2024 00:30:40.343 read: IOPS=181, BW=727KiB/s (744kB/s)(7288KiB/10030msec) 00:30:40.343 slat (nsec): min=3964, max=63640, avg=11507.57, stdev=5781.26 00:30:40.343 clat (msec): min=22, max=194, avg=88.00, stdev=28.93 00:30:40.343 lat (msec): min=22, max=194, avg=88.01, stdev=28.93 00:30:40.343 clat percentiles (msec): 00:30:40.343 | 1.00th=[ 34], 5.00th=[ 45], 10.00th=[ 52], 20.00th=[ 68], 00:30:40.343 | 30.00th=[ 71], 40.00th=[ 75], 50.00th=[ 84], 60.00th=[ 95], 00:30:40.343 | 70.00th=[ 99], 80.00th=[ 108], 90.00th=[ 128], 95.00th=[ 142], 00:30:40.343 | 99.00th=[ 174], 99.50th=[ 178], 99.90th=[ 194], 99.95th=[ 194], 00:30:40.343 | 99.99th=[ 194] 00:30:40.343 bw ( KiB/s): min= 512, max= 1024, per=3.68%, avg=725.74, stdev=148.54, samples=19 00:30:40.343 iops : min= 128, max= 256, avg=181.32, stdev=37.08, samples=19 00:30:40.343 lat (msec) : 50=8.56%, 100=63.34%, 250=28.10% 00:30:40.343 cpu : usr=37.84%, sys=0.99%, ctx=1187, majf=0, minf=9 00:30:40.343 IO depths : 1=2.1%, 2=4.9%, 4=15.2%, 8=67.0%, 16=10.8%, 32=0.0%, >=64=0.0% 00:30:40.343 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.343 complete : 0=0.0%, 4=91.1%, 8=3.6%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.343 issued rwts: total=1822,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:40.343 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:40.343 filename1: (groupid=0, jobs=1): err= 0: pid=116757: Mon Jul 15 13:29:35 2024 00:30:40.343 read: IOPS=192, BW=771KiB/s (790kB/s)(7740KiB/10035msec) 00:30:40.343 slat (usec): min=3, max=4018, avg=12.74, stdev=91.22 00:30:40.343 clat (msec): min=23, max=171, avg=82.82, stdev=27.67 00:30:40.343 lat (msec): min=23, max=171, avg=82.83, stdev=27.67 00:30:40.343 clat 
percentiles (msec): 00:30:40.343 | 1.00th=[ 24], 5.00th=[ 40], 10.00th=[ 52], 20.00th=[ 64], 00:30:40.343 | 30.00th=[ 68], 40.00th=[ 73], 50.00th=[ 78], 60.00th=[ 86], 00:30:40.343 | 70.00th=[ 95], 80.00th=[ 103], 90.00th=[ 121], 95.00th=[ 136], 00:30:40.343 | 99.00th=[ 159], 99.50th=[ 169], 99.90th=[ 169], 99.95th=[ 171], 00:30:40.343 | 99.99th=[ 171] 00:30:40.343 bw ( KiB/s): min= 384, max= 1408, per=3.89%, avg=767.00, stdev=208.42, samples=20 00:30:40.343 iops : min= 96, max= 352, avg=191.65, stdev=52.12, samples=20 00:30:40.343 lat (msec) : 50=9.97%, 100=67.70%, 250=22.33% 00:30:40.343 cpu : usr=41.74%, sys=0.99%, ctx=1152, majf=0, minf=9 00:30:40.343 IO depths : 1=1.6%, 2=3.3%, 4=10.2%, 8=72.4%, 16=12.5%, 32=0.0%, >=64=0.0% 00:30:40.343 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.343 complete : 0=0.0%, 4=90.3%, 8=5.6%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.343 issued rwts: total=1935,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:40.343 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:40.343 filename1: (groupid=0, jobs=1): err= 0: pid=116758: Mon Jul 15 13:29:35 2024 00:30:40.343 read: IOPS=222, BW=890KiB/s (911kB/s)(8904KiB/10007msec) 00:30:40.343 slat (nsec): min=3932, max=51613, avg=10669.02, stdev=4330.08 00:30:40.343 clat (msec): min=22, max=188, avg=71.87, stdev=24.73 00:30:40.343 lat (msec): min=22, max=188, avg=71.88, stdev=24.73 00:30:40.343 clat percentiles (msec): 00:30:40.343 | 1.00th=[ 24], 5.00th=[ 37], 10.00th=[ 46], 20.00th=[ 48], 00:30:40.343 | 30.00th=[ 58], 40.00th=[ 63], 50.00th=[ 72], 60.00th=[ 74], 00:30:40.343 | 70.00th=[ 83], 80.00th=[ 93], 90.00th=[ 104], 95.00th=[ 120], 00:30:40.343 | 99.00th=[ 142], 99.50th=[ 148], 99.90th=[ 190], 99.95th=[ 190], 00:30:40.343 | 99.99th=[ 190] 00:30:40.343 bw ( KiB/s): min= 512, max= 1152, per=4.48%, avg=883.90, stdev=183.18, samples=20 00:30:40.343 iops : min= 128, max= 288, avg=220.90, stdev=45.77, samples=20 00:30:40.343 lat (msec) : 50=25.92%, 100=62.13%, 250=11.95% 00:30:40.343 cpu : usr=34.71%, sys=0.61%, ctx=1038, majf=0, minf=9 00:30:40.343 IO depths : 1=0.3%, 2=0.7%, 4=5.5%, 8=79.6%, 16=14.0%, 32=0.0%, >=64=0.0% 00:30:40.343 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.343 complete : 0=0.0%, 4=89.1%, 8=7.0%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.343 issued rwts: total=2226,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:40.343 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:40.343 filename2: (groupid=0, jobs=1): err= 0: pid=116759: Mon Jul 15 13:29:35 2024 00:30:40.343 read: IOPS=202, BW=812KiB/s (831kB/s)(8160KiB/10055msec) 00:30:40.343 slat (nsec): min=4840, max=48769, avg=11069.02, stdev=4432.68 00:30:40.343 clat (msec): min=21, max=178, avg=78.74, stdev=28.98 00:30:40.343 lat (msec): min=21, max=178, avg=78.75, stdev=28.98 00:30:40.343 clat percentiles (msec): 00:30:40.343 | 1.00th=[ 23], 5.00th=[ 36], 10.00th=[ 47], 20.00th=[ 54], 00:30:40.343 | 30.00th=[ 61], 40.00th=[ 70], 50.00th=[ 73], 60.00th=[ 85], 00:30:40.343 | 70.00th=[ 96], 80.00th=[ 107], 90.00th=[ 117], 95.00th=[ 128], 00:30:40.343 | 99.00th=[ 157], 99.50th=[ 169], 99.90th=[ 180], 99.95th=[ 180], 00:30:40.343 | 99.99th=[ 180] 00:30:40.343 bw ( KiB/s): min= 472, max= 1192, per=4.10%, avg=809.35, stdev=193.72, samples=20 00:30:40.343 iops : min= 118, max= 298, avg=202.30, stdev=48.40, samples=20 00:30:40.343 lat (msec) : 50=19.22%, 100=56.13%, 250=24.66% 00:30:40.343 cpu : usr=32.48%, sys=0.69%, ctx=916, majf=0, minf=9 00:30:40.343 IO depths : 
1=0.5%, 2=1.1%, 4=7.7%, 8=77.6%, 16=12.9%, 32=0.0%, >=64=0.0% 00:30:40.343 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.343 complete : 0=0.0%, 4=89.2%, 8=6.2%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.343 issued rwts: total=2040,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:40.343 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:40.343 filename2: (groupid=0, jobs=1): err= 0: pid=116760: Mon Jul 15 13:29:35 2024 00:30:40.343 read: IOPS=209, BW=837KiB/s (857kB/s)(8420KiB/10060msec) 00:30:40.343 slat (usec): min=6, max=8034, avg=21.50, stdev=262.02 00:30:40.343 clat (msec): min=21, max=147, avg=76.21, stdev=24.40 00:30:40.343 lat (msec): min=21, max=147, avg=76.24, stdev=24.41 00:30:40.343 clat percentiles (msec): 00:30:40.343 | 1.00th=[ 23], 5.00th=[ 42], 10.00th=[ 47], 20.00th=[ 53], 00:30:40.343 | 30.00th=[ 61], 40.00th=[ 70], 50.00th=[ 74], 60.00th=[ 82], 00:30:40.343 | 70.00th=[ 87], 80.00th=[ 101], 90.00th=[ 109], 95.00th=[ 117], 00:30:40.343 | 99.00th=[ 138], 99.50th=[ 148], 99.90th=[ 148], 99.95th=[ 148], 00:30:40.343 | 99.99th=[ 148] 00:30:40.343 bw ( KiB/s): min= 600, max= 1277, per=4.25%, avg=837.05, stdev=175.88, samples=20 00:30:40.343 iops : min= 150, max= 319, avg=209.20, stdev=43.91, samples=20 00:30:40.343 lat (msec) : 50=15.87%, 100=64.42%, 250=19.71% 00:30:40.343 cpu : usr=38.11%, sys=0.86%, ctx=1024, majf=0, minf=9 00:30:40.343 IO depths : 1=1.0%, 2=2.0%, 4=9.8%, 8=74.3%, 16=12.8%, 32=0.0%, >=64=0.0% 00:30:40.343 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.343 complete : 0=0.0%, 4=89.6%, 8=6.1%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.343 issued rwts: total=2105,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:40.343 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:40.343 filename2: (groupid=0, jobs=1): err= 0: pid=116761: Mon Jul 15 13:29:35 2024 00:30:40.343 read: IOPS=227, BW=912KiB/s (934kB/s)(9160KiB/10045msec) 00:30:40.343 slat (usec): min=4, max=8021, avg=18.72, stdev=206.19 00:30:40.343 clat (msec): min=6, max=153, avg=69.86, stdev=24.83 00:30:40.343 lat (msec): min=6, max=153, avg=69.88, stdev=24.83 00:30:40.343 clat percentiles (msec): 00:30:40.343 | 1.00th=[ 17], 5.00th=[ 39], 10.00th=[ 43], 20.00th=[ 48], 00:30:40.343 | 30.00th=[ 55], 40.00th=[ 62], 50.00th=[ 68], 60.00th=[ 72], 00:30:40.343 | 70.00th=[ 82], 80.00th=[ 90], 90.00th=[ 104], 95.00th=[ 113], 00:30:40.343 | 99.00th=[ 146], 99.50th=[ 148], 99.90th=[ 155], 99.95th=[ 155], 00:30:40.343 | 99.99th=[ 155] 00:30:40.343 bw ( KiB/s): min= 512, max= 1165, per=4.61%, avg=909.35, stdev=185.49, samples=20 00:30:40.343 iops : min= 128, max= 291, avg=227.30, stdev=46.34, samples=20 00:30:40.343 lat (msec) : 10=0.70%, 20=0.52%, 50=22.88%, 100=64.50%, 250=11.40% 00:30:40.343 cpu : usr=39.10%, sys=1.02%, ctx=1086, majf=0, minf=9 00:30:40.343 IO depths : 1=1.7%, 2=3.5%, 4=11.0%, 8=72.4%, 16=11.4%, 32=0.0%, >=64=0.0% 00:30:40.343 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.343 complete : 0=0.0%, 4=90.4%, 8=4.5%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.343 issued rwts: total=2290,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:40.343 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:40.344 filename2: (groupid=0, jobs=1): err= 0: pid=116762: Mon Jul 15 13:29:35 2024 00:30:40.344 read: IOPS=210, BW=843KiB/s (863kB/s)(8484KiB/10067msec) 00:30:40.344 slat (usec): min=4, max=8036, avg=18.54, stdev=246.23 00:30:40.344 clat (msec): min=8, max=202, avg=75.69, stdev=27.34 
00:30:40.344 lat (msec): min=8, max=202, avg=75.71, stdev=27.34 00:30:40.344 clat percentiles (msec): 00:30:40.344 | 1.00th=[ 14], 5.00th=[ 34], 10.00th=[ 47], 20.00th=[ 58], 00:30:40.344 | 30.00th=[ 62], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 80], 00:30:40.344 | 70.00th=[ 84], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 129], 00:30:40.344 | 99.00th=[ 155], 99.50th=[ 157], 99.90th=[ 163], 99.95th=[ 203], 00:30:40.344 | 99.99th=[ 203] 00:30:40.344 bw ( KiB/s): min= 512, max= 1200, per=4.27%, avg=841.70, stdev=188.72, samples=20 00:30:40.344 iops : min= 128, max= 300, avg=210.35, stdev=47.15, samples=20 00:30:40.344 lat (msec) : 10=0.71%, 20=0.80%, 50=15.46%, 100=68.13%, 250=14.90% 00:30:40.344 cpu : usr=34.29%, sys=0.78%, ctx=995, majf=0, minf=9 00:30:40.344 IO depths : 1=1.2%, 2=2.9%, 4=11.9%, 8=72.0%, 16=11.9%, 32=0.0%, >=64=0.0% 00:30:40.344 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.344 complete : 0=0.0%, 4=90.3%, 8=4.7%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.344 issued rwts: total=2121,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:40.344 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:40.344 filename2: (groupid=0, jobs=1): err= 0: pid=116763: Mon Jul 15 13:29:35 2024 00:30:40.344 read: IOPS=234, BW=937KiB/s (959kB/s)(9416KiB/10053msec) 00:30:40.344 slat (usec): min=4, max=8025, avg=26.71, stdev=280.07 00:30:40.344 clat (msec): min=20, max=145, avg=68.08, stdev=24.38 00:30:40.344 lat (msec): min=20, max=145, avg=68.11, stdev=24.38 00:30:40.344 clat percentiles (msec): 00:30:40.344 | 1.00th=[ 23], 5.00th=[ 33], 10.00th=[ 43], 20.00th=[ 48], 00:30:40.344 | 30.00th=[ 52], 40.00th=[ 61], 50.00th=[ 68], 60.00th=[ 72], 00:30:40.344 | 70.00th=[ 78], 80.00th=[ 85], 90.00th=[ 104], 95.00th=[ 121], 00:30:40.344 | 99.00th=[ 138], 99.50th=[ 146], 99.90th=[ 146], 99.95th=[ 146], 00:30:40.344 | 99.99th=[ 146] 00:30:40.344 bw ( KiB/s): min= 688, max= 1328, per=4.74%, avg=935.00, stdev=179.80, samples=20 00:30:40.344 iops : min= 172, max= 332, avg=233.70, stdev=44.96, samples=20 00:30:40.344 lat (msec) : 50=27.53%, 100=61.30%, 250=11.17% 00:30:40.344 cpu : usr=40.81%, sys=0.84%, ctx=1221, majf=0, minf=10 00:30:40.344 IO depths : 1=0.8%, 2=1.7%, 4=8.4%, 8=76.5%, 16=12.7%, 32=0.0%, >=64=0.0% 00:30:40.344 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.344 complete : 0=0.0%, 4=89.5%, 8=6.0%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.344 issued rwts: total=2354,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:40.344 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:40.344 filename2: (groupid=0, jobs=1): err= 0: pid=116764: Mon Jul 15 13:29:35 2024 00:30:40.344 read: IOPS=223, BW=893KiB/s (915kB/s)(8976KiB/10049msec) 00:30:40.344 slat (usec): min=3, max=6022, avg=15.50, stdev=152.58 00:30:40.344 clat (msec): min=16, max=155, avg=71.40, stdev=24.20 00:30:40.344 lat (msec): min=16, max=155, avg=71.42, stdev=24.20 00:30:40.344 clat percentiles (msec): 00:30:40.344 | 1.00th=[ 22], 5.00th=[ 34], 10.00th=[ 41], 20.00th=[ 52], 00:30:40.344 | 30.00th=[ 61], 40.00th=[ 65], 50.00th=[ 71], 60.00th=[ 74], 00:30:40.344 | 70.00th=[ 81], 80.00th=[ 93], 90.00th=[ 104], 95.00th=[ 110], 00:30:40.344 | 99.00th=[ 144], 99.50th=[ 150], 99.90th=[ 157], 99.95th=[ 157], 00:30:40.344 | 99.99th=[ 157] 00:30:40.344 bw ( KiB/s): min= 560, max= 1536, per=4.52%, avg=890.95, stdev=223.22, samples=20 00:30:40.344 iops : min= 140, max= 384, avg=222.70, stdev=55.78, samples=20 00:30:40.344 lat (msec) : 20=0.71%, 50=18.94%, 100=67.74%, 
250=12.61% 00:30:40.344 cpu : usr=42.17%, sys=0.94%, ctx=1337, majf=0, minf=9 00:30:40.344 IO depths : 1=1.7%, 2=4.3%, 4=13.7%, 8=68.8%, 16=11.5%, 32=0.0%, >=64=0.0% 00:30:40.344 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.344 complete : 0=0.0%, 4=91.0%, 8=4.1%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.344 issued rwts: total=2244,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:40.344 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:40.344 filename2: (groupid=0, jobs=1): err= 0: pid=116765: Mon Jul 15 13:29:35 2024 00:30:40.344 read: IOPS=202, BW=810KiB/s (830kB/s)(8128KiB/10032msec) 00:30:40.344 slat (usec): min=4, max=8029, avg=26.18, stdev=334.52 00:30:40.344 clat (msec): min=25, max=164, avg=78.81, stdev=25.10 00:30:40.344 lat (msec): min=25, max=164, avg=78.83, stdev=25.10 00:30:40.344 clat percentiles (msec): 00:30:40.344 | 1.00th=[ 36], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 58], 00:30:40.344 | 30.00th=[ 68], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 84], 00:30:40.344 | 70.00th=[ 90], 80.00th=[ 96], 90.00th=[ 109], 95.00th=[ 129], 00:30:40.344 | 99.00th=[ 144], 99.50th=[ 157], 99.90th=[ 165], 99.95th=[ 165], 00:30:40.344 | 99.99th=[ 165] 00:30:40.344 bw ( KiB/s): min= 512, max= 1072, per=4.08%, avg=805.85, stdev=151.72, samples=20 00:30:40.344 iops : min= 128, max= 268, avg=201.40, stdev=37.94, samples=20 00:30:40.344 lat (msec) : 50=15.16%, 100=66.34%, 250=18.50% 00:30:40.344 cpu : usr=32.42%, sys=0.72%, ctx=912, majf=0, minf=9 00:30:40.344 IO depths : 1=1.1%, 2=2.5%, 4=9.8%, 8=74.1%, 16=12.5%, 32=0.0%, >=64=0.0% 00:30:40.344 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.344 complete : 0=0.0%, 4=89.9%, 8=5.6%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.344 issued rwts: total=2032,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:40.344 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:40.344 filename2: (groupid=0, jobs=1): err= 0: pid=116766: Mon Jul 15 13:29:35 2024 00:30:40.344 read: IOPS=214, BW=857KiB/s (878kB/s)(8620KiB/10053msec) 00:30:40.344 slat (usec): min=4, max=8034, avg=48.29, stdev=544.74 00:30:40.344 clat (msec): min=21, max=148, avg=74.29, stdev=22.49 00:30:40.344 lat (msec): min=21, max=148, avg=74.34, stdev=22.50 00:30:40.344 clat percentiles (msec): 00:30:40.344 | 1.00th=[ 27], 5.00th=[ 39], 10.00th=[ 48], 20.00th=[ 58], 00:30:40.344 | 30.00th=[ 62], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 75], 00:30:40.344 | 70.00th=[ 84], 80.00th=[ 96], 90.00th=[ 105], 95.00th=[ 115], 00:30:40.344 | 99.00th=[ 131], 99.50th=[ 140], 99.90th=[ 148], 99.95th=[ 148], 00:30:40.344 | 99.99th=[ 148] 00:30:40.344 bw ( KiB/s): min= 560, max= 1104, per=4.34%, avg=855.20, stdev=150.20, samples=20 00:30:40.344 iops : min= 140, max= 276, avg=213.80, stdev=37.55, samples=20 00:30:40.344 lat (msec) : 50=16.98%, 100=70.67%, 250=12.34% 00:30:40.344 cpu : usr=37.17%, sys=0.93%, ctx=1131, majf=0, minf=9 00:30:40.344 IO depths : 1=1.8%, 2=3.9%, 4=11.8%, 8=71.1%, 16=11.4%, 32=0.0%, >=64=0.0% 00:30:40.344 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.344 complete : 0=0.0%, 4=90.5%, 8=4.5%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.344 issued rwts: total=2155,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:40.344 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:40.344 00:30:40.344 Run status group 0 (all jobs): 00:30:40.344 READ: bw=19.2MiB/s (20.2MB/s), 712KiB/s-1002KiB/s (729kB/s-1026kB/s), io=194MiB (203MB), run=10003-10076msec 00:30:40.344 13:29:35 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:30:40.344 13:29:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:30:40.344 13:29:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:40.344 13:29:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:40.344 13:29:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:30:40.344 13:29:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:40.344 13:29:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:40.344 13:29:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:40.344 13:29:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:40.344 13:29:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:40.344 13:29:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:40.344 13:29:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:40.344 13:29:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:40.344 13:29:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:40.344 13:29:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:40.344 13:29:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:30:40.344 13:29:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:40.344 13:29:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:40.344 13:29:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:40.344 13:29:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:40.344 13:29:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:40.344 13:29:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:40.344 13:29:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:40.344 13:29:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:40.344 13:29:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:40.344 13:29:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:30:40.344 13:29:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:30:40.344 13:29:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:40.344 13:29:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:40.344 13:29:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:40.344 13:29:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:40.344 13:29:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:30:40.344 13:29:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:40.344 13:29:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:40.344 13:29:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
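Editor's note: the destroy_subsystems 0 1 2 call traced above is just three pairs of RPCs against the running nvmf_tgt. A minimal manual equivalent, assuming the in-tree scripts/rpc.py and the target's default RPC socket (rpc_cmd in the trace is a thin wrapper around the same calls):

for i in 0 1 2; do
    # Remove the NVMe-oF subsystem first, then the null bdev that backed it.
    scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
    scripts/rpc.py bdev_null_delete "bdev_null${i}"
done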
00:30:40.344 13:29:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:30:40.344 13:29:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:30:40.344 13:29:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:30:40.344 13:29:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:30:40.344 13:29:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:30:40.344 13:29:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:30:40.344 13:29:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:30:40.344 13:29:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:30:40.344 13:29:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:40.344 13:29:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:30:40.344 13:29:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:30:40.344 13:29:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:40.344 13:29:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:40.344 13:29:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:40.344 bdev_null0 00:30:40.344 13:29:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:40.345 13:29:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:40.345 13:29:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:40.345 13:29:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:40.345 13:29:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:40.345 13:29:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:40.345 13:29:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:40.345 13:29:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:40.345 13:29:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:40.345 13:29:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:40.345 13:29:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:40.345 13:29:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:40.345 [2024-07-15 13:29:35.323848] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:40.345 13:29:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:40.345 13:29:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:40.345 13:29:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:30:40.345 13:29:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:30:40.345 13:29:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:30:40.345 13:29:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:40.345 13:29:35 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:40.345 bdev_null1 00:30:40.345 13:29:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:40.345 13:29:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:40.345 13:29:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:40.345 13:29:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:40.345 13:29:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:40.345 13:29:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:40.345 13:29:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:40.345 13:29:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:40.345 13:29:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:40.345 13:29:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:40.345 13:29:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:40.345 13:29:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:40.345 13:29:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:40.345 13:29:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:30:40.345 13:29:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:30:40.345 13:29:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:30:40.345 13:29:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:30:40.345 13:29:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:30:40.345 13:29:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:40.345 13:29:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:40.345 { 00:30:40.345 "params": { 00:30:40.345 "name": "Nvme$subsystem", 00:30:40.345 "trtype": "$TEST_TRANSPORT", 00:30:40.345 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:40.345 "adrfam": "ipv4", 00:30:40.345 "trsvcid": "$NVMF_PORT", 00:30:40.345 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:40.345 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:40.345 "hdgst": ${hdgst:-false}, 00:30:40.345 "ddgst": ${ddgst:-false} 00:30:40.345 }, 00:30:40.345 "method": "bdev_nvme_attach_controller" 00:30:40.345 } 00:30:40.345 EOF 00:30:40.345 )") 00:30:40.345 13:29:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:40.345 13:29:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:40.345 13:29:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:30:40.345 13:29:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:30:40.345 13:29:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:40.345 13:29:35 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:40.345 13:29:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:30:40.345 13:29:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:30:40.345 13:29:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:40.345 13:29:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:30:40.345 13:29:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:30:40.345 13:29:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:40.345 13:29:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:30:40.345 13:29:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:40.345 13:29:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:40.345 13:29:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:40.345 { 00:30:40.345 "params": { 00:30:40.345 "name": "Nvme$subsystem", 00:30:40.345 "trtype": "$TEST_TRANSPORT", 00:30:40.345 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:40.345 "adrfam": "ipv4", 00:30:40.345 "trsvcid": "$NVMF_PORT", 00:30:40.345 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:40.345 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:40.345 "hdgst": ${hdgst:-false}, 00:30:40.345 "ddgst": ${ddgst:-false} 00:30:40.345 }, 00:30:40.345 "method": "bdev_nvme_attach_controller" 00:30:40.345 } 00:30:40.345 EOF 00:30:40.345 )") 00:30:40.345 13:29:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:30:40.345 13:29:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:40.345 13:29:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:40.345 13:29:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:30:40.345 13:29:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:40.345 13:29:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:30:40.345 13:29:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
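Editor's note: the create_subsystems 0 1 step traced above (dif.sh@21-24) pairs one DIF-enabled null bdev with one TCP subsystem per index. A rough manual equivalent, assuming the same 10.0.0.2:4420 listener and the default RPC socket:

for i in 0 1; do
    # Null bdev of 64 MiB with 512-byte blocks, 16 bytes of metadata, DIF type 1 (NULL_DIF=1).
    scripts/rpc.py bdev_null_create "bdev_null${i}" 64 512 --md-size 16 --dif-type 1
    scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode${i}" \
        --serial-number "53313233-${i}" --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode${i}" "bdev_null${i}"
    scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode${i}" \
        -t tcp -a 10.0.0.2 -s 4420
done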
00:30:40.345 13:29:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:30:40.345 13:29:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:40.345 13:29:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:30:40.345 13:29:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:40.345 "params": { 00:30:40.345 "name": "Nvme0", 00:30:40.345 "trtype": "tcp", 00:30:40.345 "traddr": "10.0.0.2", 00:30:40.345 "adrfam": "ipv4", 00:30:40.345 "trsvcid": "4420", 00:30:40.345 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:40.345 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:40.345 "hdgst": false, 00:30:40.345 "ddgst": false 00:30:40.345 }, 00:30:40.345 "method": "bdev_nvme_attach_controller" 00:30:40.345 },{ 00:30:40.345 "params": { 00:30:40.345 "name": "Nvme1", 00:30:40.345 "trtype": "tcp", 00:30:40.345 "traddr": "10.0.0.2", 00:30:40.345 "adrfam": "ipv4", 00:30:40.345 "trsvcid": "4420", 00:30:40.345 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:40.345 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:40.345 "hdgst": false, 00:30:40.345 "ddgst": false 00:30:40.345 }, 00:30:40.345 "method": "bdev_nvme_attach_controller" 00:30:40.345 }' 00:30:40.345 13:29:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:40.345 13:29:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:40.345 13:29:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:40.345 13:29:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:40.345 13:29:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:30:40.345 13:29:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:40.345 13:29:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:40.345 13:29:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:40.345 13:29:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:30:40.345 13:29:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:40.345 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:30:40.345 ... 00:30:40.345 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:30:40.345 ... 
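Editor's note: the JSON printed just above is what fio consumes via --spdk_json_conf. For reference, a standalone sketch of the same run, assuming the standard SPDK "subsystems"/"bdev" wrapper around the two attach calls and hypothetical file names (bdev.json here, plus dif.fio standing in for the job file that gen_fio_conf normally feeds in over /dev/fd/61):

cat > bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                      "adrfam": "ipv4", "trsvcid": "4420",
                      "subnqn": "nqn.2016-06.io.spdk:cnode0",
                      "hostnqn": "nqn.2016-06.io.spdk:host0",
                      "hdgst": false, "ddgst": false }
        },
        {
          "method": "bdev_nvme_attach_controller",
          "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                      "adrfam": "ipv4", "trsvcid": "4420",
                      "subnqn": "nqn.2016-06.io.spdk:cnode1",
                      "hostnqn": "nqn.2016-06.io.spdk:host1",
                      "hdgst": false, "ddgst": false }
        }
      ]
    }
  ]
}
EOF
# Same invocation as the trace, with plain files instead of the /dev/fd/62 and /dev/fd/61 pipes.
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf=bdev.json dif.fio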
00:30:40.345 fio-3.35 00:30:40.345 Starting 4 threads 00:30:44.531 00:30:44.531 filename0: (groupid=0, jobs=1): err= 0: pid=116893: Mon Jul 15 13:29:41 2024 00:30:44.531 read: IOPS=1961, BW=15.3MiB/s (16.1MB/s)(76.6MiB/5001msec) 00:30:44.531 slat (nsec): min=6845, max=59080, avg=13859.70, stdev=5166.29 00:30:44.531 clat (usec): min=2178, max=7486, avg=4007.37, stdev=165.05 00:30:44.531 lat (usec): min=2185, max=7512, avg=4021.23, stdev=165.45 00:30:44.531 clat percentiles (usec): 00:30:44.531 | 1.00th=[ 3785], 5.00th=[ 3851], 10.00th=[ 3884], 20.00th=[ 3916], 00:30:44.531 | 30.00th=[ 3949], 40.00th=[ 3982], 50.00th=[ 4015], 60.00th=[ 4015], 00:30:44.531 | 70.00th=[ 4047], 80.00th=[ 4080], 90.00th=[ 4113], 95.00th=[ 4146], 00:30:44.531 | 99.00th=[ 4228], 99.50th=[ 4359], 99.90th=[ 6194], 99.95th=[ 7439], 00:30:44.531 | 99.99th=[ 7504] 00:30:44.531 bw ( KiB/s): min=15360, max=15872, per=24.99%, avg=15687.11, stdev=170.67, samples=9 00:30:44.531 iops : min= 1920, max= 1984, avg=1960.89, stdev=21.33, samples=9 00:30:44.531 lat (msec) : 4=47.11%, 10=52.89% 00:30:44.531 cpu : usr=93.74%, sys=5.16%, ctx=7, majf=0, minf=0 00:30:44.531 IO depths : 1=12.3%, 2=25.0%, 4=50.0%, 8=12.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:44.531 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:44.531 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:44.531 issued rwts: total=9808,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:44.531 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:44.531 filename0: (groupid=0, jobs=1): err= 0: pid=116894: Mon Jul 15 13:29:41 2024 00:30:44.531 read: IOPS=1963, BW=15.3MiB/s (16.1MB/s)(76.7MiB/5003msec) 00:30:44.531 slat (nsec): min=6912, max=58627, avg=11099.96, stdev=4895.46 00:30:44.531 clat (usec): min=2004, max=5089, avg=4030.65, stdev=137.17 00:30:44.531 lat (usec): min=2011, max=5107, avg=4041.75, stdev=136.76 00:30:44.531 clat percentiles (usec): 00:30:44.531 | 1.00th=[ 3621], 5.00th=[ 3884], 10.00th=[ 3916], 20.00th=[ 3949], 00:30:44.531 | 30.00th=[ 3982], 40.00th=[ 4015], 50.00th=[ 4047], 60.00th=[ 4047], 00:30:44.531 | 70.00th=[ 4080], 80.00th=[ 4113], 90.00th=[ 4146], 95.00th=[ 4178], 00:30:44.531 | 99.00th=[ 4359], 99.50th=[ 4621], 99.90th=[ 4883], 99.95th=[ 4948], 00:30:44.531 | 99.99th=[ 5080] 00:30:44.531 bw ( KiB/s): min=15488, max=15920, per=25.03%, avg=15712.00, stdev=153.26, samples=9 00:30:44.531 iops : min= 1936, max= 1990, avg=1964.00, stdev=19.16, samples=9 00:30:44.531 lat (msec) : 4=36.96%, 10=63.04% 00:30:44.531 cpu : usr=93.84%, sys=4.98%, ctx=15, majf=0, minf=0 00:30:44.531 IO depths : 1=6.1%, 2=12.5%, 4=62.4%, 8=18.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:44.531 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:44.531 complete : 0=0.0%, 4=89.5%, 8=10.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:44.531 issued rwts: total=9822,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:44.531 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:44.531 filename1: (groupid=0, jobs=1): err= 0: pid=116895: Mon Jul 15 13:29:41 2024 00:30:44.531 read: IOPS=1962, BW=15.3MiB/s (16.1MB/s)(76.7MiB/5001msec) 00:30:44.531 slat (nsec): min=6540, max=53279, avg=9129.95, stdev=3801.48 00:30:44.531 clat (usec): min=2676, max=4847, avg=4029.43, stdev=110.67 00:30:44.531 lat (usec): min=2682, max=4855, avg=4038.56, stdev=110.95 00:30:44.531 clat percentiles (usec): 00:30:44.531 | 1.00th=[ 3785], 5.00th=[ 3884], 10.00th=[ 3916], 20.00th=[ 3949], 00:30:44.531 | 30.00th=[ 3982], 40.00th=[ 4015], 
50.00th=[ 4047], 60.00th=[ 4047], 00:30:44.531 | 70.00th=[ 4080], 80.00th=[ 4113], 90.00th=[ 4146], 95.00th=[ 4178], 00:30:44.531 | 99.00th=[ 4293], 99.50th=[ 4424], 99.90th=[ 4621], 99.95th=[ 4621], 00:30:44.531 | 99.99th=[ 4817] 00:30:44.531 bw ( KiB/s): min=15488, max=15872, per=25.03%, avg=15712.00, stdev=128.00, samples=9 00:30:44.531 iops : min= 1936, max= 1984, avg=1964.00, stdev=16.00, samples=9 00:30:44.531 lat (msec) : 4=35.69%, 10=64.31% 00:30:44.531 cpu : usr=93.20%, sys=5.58%, ctx=11, majf=0, minf=0 00:30:44.531 IO depths : 1=10.8%, 2=25.0%, 4=50.0%, 8=14.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:44.531 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:44.531 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:44.531 issued rwts: total=9816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:44.531 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:44.531 filename1: (groupid=0, jobs=1): err= 0: pid=116896: Mon Jul 15 13:29:41 2024 00:30:44.531 read: IOPS=1962, BW=15.3MiB/s (16.1MB/s)(76.7MiB/5002msec) 00:30:44.531 slat (nsec): min=7035, max=58265, avg=13924.56, stdev=4922.57 00:30:44.531 clat (usec): min=2120, max=5108, avg=4008.96, stdev=121.75 00:30:44.531 lat (usec): min=2133, max=5119, avg=4022.89, stdev=121.67 00:30:44.531 clat percentiles (usec): 00:30:44.531 | 1.00th=[ 3785], 5.00th=[ 3851], 10.00th=[ 3884], 20.00th=[ 3916], 00:30:44.531 | 30.00th=[ 3949], 40.00th=[ 3982], 50.00th=[ 4015], 60.00th=[ 4047], 00:30:44.531 | 70.00th=[ 4047], 80.00th=[ 4080], 90.00th=[ 4146], 95.00th=[ 4178], 00:30:44.531 | 99.00th=[ 4293], 99.50th=[ 4359], 99.90th=[ 4883], 99.95th=[ 5014], 00:30:44.531 | 99.99th=[ 5080] 00:30:44.531 bw ( KiB/s): min=15390, max=16000, per=25.01%, avg=15704.67, stdev=185.48, samples=9 00:30:44.531 iops : min= 1923, max= 2000, avg=1963.00, stdev=23.35, samples=9 00:30:44.531 lat (msec) : 4=45.57%, 10=54.43% 00:30:44.531 cpu : usr=94.04%, sys=4.80%, ctx=13, majf=0, minf=0 00:30:44.531 IO depths : 1=12.3%, 2=25.0%, 4=50.0%, 8=12.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:44.531 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:44.531 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:44.531 issued rwts: total=9816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:44.531 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:44.531 00:30:44.531 Run status group 0 (all jobs): 00:30:44.531 READ: bw=61.3MiB/s (64.3MB/s), 15.3MiB/s-15.3MiB/s (16.1MB/s-16.1MB/s), io=307MiB (322MB), run=5001-5003msec 00:30:44.790 13:29:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:30:44.790 13:29:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:30:44.790 13:29:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:44.790 13:29:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:44.790 13:29:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:30:44.790 13:29:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:44.790 13:29:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:44.790 13:29:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:44.790 13:29:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:44.790 13:29:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # 
rpc_cmd bdev_null_delete bdev_null0 00:30:44.790 13:29:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:44.790 13:29:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:44.790 13:29:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:44.790 13:29:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:44.790 13:29:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:44.790 13:29:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:30:44.790 13:29:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:44.790 13:29:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:44.790 13:29:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:44.790 13:29:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:44.790 13:29:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:44.790 13:29:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:44.790 13:29:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:44.790 ************************************ 00:30:44.790 END TEST fio_dif_rand_params 00:30:44.790 ************************************ 00:30:44.790 13:29:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:44.790 00:30:44.790 real 0m23.527s 00:30:44.790 user 2m6.588s 00:30:44.790 sys 0m4.949s 00:30:44.790 13:29:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:44.790 13:29:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:44.790 13:29:41 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:30:44.790 13:29:41 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:30:44.790 13:29:41 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:44.790 13:29:41 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:44.790 ************************************ 00:30:44.790 START TEST fio_dif_digest 00:30:44.790 ************************************ 00:30:44.790 13:29:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1121 -- # fio_dif_digest 00:30:44.790 13:29:41 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:30:44.790 13:29:41 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:30:44.790 13:29:41 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:30:44.790 13:29:41 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:30:44.790 13:29:41 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:30:44.790 13:29:41 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:30:44.790 13:29:41 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:30:44.790 13:29:41 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:30:44.790 13:29:41 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:30:44.790 13:29:41 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:30:44.790 13:29:41 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:30:44.790 13:29:41 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:30:44.790 13:29:41 
nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:30:44.790 13:29:41 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:30:44.790 13:29:41 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:30:44.790 13:29:41 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:30:44.790 13:29:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:44.790 13:29:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:44.790 bdev_null0 00:30:44.790 13:29:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:44.790 13:29:41 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:44.790 13:29:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:44.790 13:29:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:44.790 13:29:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:44.790 13:29:41 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:44.790 13:29:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:44.791 13:29:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:44.791 13:29:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:44.791 13:29:41 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:44.791 13:29:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:44.791 13:29:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:44.791 [2024-07-15 13:29:41.501080] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:44.791 13:29:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:44.791 13:29:41 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:30:44.791 13:29:41 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:30:44.791 13:29:41 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:44.791 13:29:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:30:44.791 13:29:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:30:44.791 13:29:41 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:44.791 13:29:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:44.791 13:29:41 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:30:44.791 13:29:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:44.791 13:29:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:44.791 { 00:30:44.791 "params": { 00:30:44.791 "name": "Nvme$subsystem", 00:30:44.791 "trtype": "$TEST_TRANSPORT", 00:30:44.791 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:44.791 "adrfam": "ipv4", 00:30:44.791 "trsvcid": "$NVMF_PORT", 00:30:44.791 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:44.791 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:44.791 "hdgst": ${hdgst:-false}, 00:30:44.791 "ddgst": ${ddgst:-false} 00:30:44.791 }, 00:30:44.791 "method": "bdev_nvme_attach_controller" 00:30:44.791 } 00:30:44.791 EOF 00:30:44.791 )") 00:30:44.791 13:29:41 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:30:44.791 13:29:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:30:44.791 13:29:41 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:30:44.791 13:29:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:44.791 13:29:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # local sanitizers 00:30:44.791 13:29:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:44.791 13:29:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # shift 00:30:44.791 13:29:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local asan_lib= 00:30:44.791 13:29:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:30:44.791 13:29:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:44.791 13:29:41 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:30:44.791 13:29:41 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:30:44.791 13:29:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:44.791 13:29:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:44.791 13:29:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libasan 00:30:44.791 13:29:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:30:44.791 13:29:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:30:44.791 13:29:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:44.791 "params": { 00:30:44.791 "name": "Nvme0", 00:30:44.791 "trtype": "tcp", 00:30:44.791 "traddr": "10.0.0.2", 00:30:44.791 "adrfam": "ipv4", 00:30:44.791 "trsvcid": "4420", 00:30:44.791 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:44.791 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:44.791 "hdgst": true, 00:30:44.791 "ddgst": true 00:30:44.791 }, 00:30:44.791 "method": "bdev_nvme_attach_controller" 00:30:44.791 }' 00:30:45.049 13:29:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:45.049 13:29:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:45.049 13:29:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:45.049 13:29:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:45.049 13:29:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:30:45.049 13:29:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:45.049 13:29:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:45.049 13:29:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:45.049 13:29:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:30:45.049 13:29:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:45.049 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:30:45.049 ... 
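Editor's note: for the digest pass the controller is attached with "hdgst": true and "ddgst": true (the JSON printed above), while the I/O pattern itself comes from the job file on /dev/fd/61. Below is a hypothetical job file matching the dif.sh@127 parameters (bs=128k, iodepth=3, numjobs=3, runtime=10); the real one is generated by gen_fio_conf and never hits disk, so the file name, bdev name, and exact option set here are assumptions:

cat > dif_digest.fio <<'EOF'
[global]
ioengine=spdk_bdev   ; requires the LD_PRELOAD'ed SPDK fio plugin
thread=1             ; the spdk_bdev engine runs jobs as threads, not forked processes
rw=randread
bs=128k
iodepth=3
numjobs=3
time_based=1
runtime=10

[filename0]
filename=Nvme0n1     ; bdev assumed to be exposed for the "Nvme0" controller's first namespace
EOF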
00:30:45.049 fio-3.35 00:30:45.049 Starting 3 threads 00:30:57.251 00:30:57.251 filename0: (groupid=0, jobs=1): err= 0: pid=117001: Mon Jul 15 13:29:52 2024 00:30:57.251 read: IOPS=226, BW=28.3MiB/s (29.7MB/s)(284MiB/10043msec) 00:30:57.251 slat (usec): min=6, max=187, avg=11.71, stdev= 5.34 00:30:57.251 clat (usec): min=9906, max=47922, avg=13225.92, stdev=1349.86 00:30:57.251 lat (usec): min=9920, max=47934, avg=13237.64, stdev=1349.65 00:30:57.251 clat percentiles (usec): 00:30:57.251 | 1.00th=[11338], 5.00th=[11731], 10.00th=[11994], 20.00th=[12518], 00:30:57.251 | 30.00th=[12780], 40.00th=[13042], 50.00th=[13173], 60.00th=[13435], 00:30:57.251 | 70.00th=[13698], 80.00th=[13960], 90.00th=[14353], 95.00th=[14746], 00:30:57.251 | 99.00th=[15270], 99.50th=[15533], 99.90th=[16188], 99.95th=[47973], 00:30:57.251 | 99.99th=[47973] 00:30:57.251 bw ( KiB/s): min=28416, max=30012, per=34.99%, avg=29059.00, stdev=465.02, samples=20 00:30:57.251 iops : min= 222, max= 234, avg=227.00, stdev= 3.58, samples=20 00:30:57.251 lat (msec) : 10=0.04%, 20=99.87%, 50=0.09% 00:30:57.251 cpu : usr=92.72%, sys=6.00%, ctx=18, majf=0, minf=0 00:30:57.251 IO depths : 1=4.5%, 2=95.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:57.251 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:57.251 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:57.251 issued rwts: total=2272,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:57.251 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:57.251 filename0: (groupid=0, jobs=1): err= 0: pid=117002: Mon Jul 15 13:29:52 2024 00:30:57.251 read: IOPS=174, BW=21.8MiB/s (22.8MB/s)(218MiB/10002msec) 00:30:57.251 slat (nsec): min=7148, max=38988, avg=12379.30, stdev=3383.01 00:30:57.251 clat (usec): min=10836, max=20073, avg=17218.62, stdev=848.13 00:30:57.251 lat (usec): min=10848, max=20086, avg=17231.00, stdev=848.58 00:30:57.251 clat percentiles (usec): 00:30:57.251 | 1.00th=[15401], 5.00th=[15926], 10.00th=[16188], 20.00th=[16581], 00:30:57.251 | 30.00th=[16712], 40.00th=[16909], 50.00th=[17171], 60.00th=[17433], 00:30:57.251 | 70.00th=[17695], 80.00th=[17957], 90.00th=[18220], 95.00th=[18744], 00:30:57.251 | 99.00th=[19268], 99.50th=[19530], 99.90th=[20055], 99.95th=[20055], 00:30:57.251 | 99.99th=[20055] 00:30:57.251 bw ( KiB/s): min=21504, max=22784, per=26.79%, avg=22247.37, stdev=282.08, samples=19 00:30:57.251 iops : min= 168, max= 178, avg=173.79, stdev= 2.20, samples=19 00:30:57.251 lat (msec) : 20=99.94%, 50=0.06% 00:30:57.251 cpu : usr=93.42%, sys=5.46%, ctx=6, majf=0, minf=0 00:30:57.251 IO depths : 1=1.7%, 2=98.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:57.251 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:57.251 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:57.251 issued rwts: total=1741,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:57.251 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:57.251 filename0: (groupid=0, jobs=1): err= 0: pid=117003: Mon Jul 15 13:29:52 2024 00:30:57.251 read: IOPS=250, BW=31.3MiB/s (32.8MB/s)(313MiB/10005msec) 00:30:57.251 slat (nsec): min=7257, max=39439, avg=12078.91, stdev=3340.51 00:30:57.251 clat (usec): min=8991, max=14621, avg=11981.45, stdev=640.72 00:30:57.251 lat (usec): min=9003, max=14633, avg=11993.53, stdev=640.91 00:30:57.251 clat percentiles (usec): 00:30:57.251 | 1.00th=[10421], 5.00th=[10945], 10.00th=[11207], 20.00th=[11469], 00:30:57.251 | 30.00th=[11731], 
40.00th=[11863], 50.00th=[11994], 60.00th=[12125], 00:30:57.251 | 70.00th=[12256], 80.00th=[12518], 90.00th=[12780], 95.00th=[13042], 00:30:57.251 | 99.00th=[13566], 99.50th=[13829], 99.90th=[13960], 99.95th=[14222], 00:30:57.251 | 99.99th=[14615] 00:30:57.251 bw ( KiB/s): min=31488, max=33024, per=38.52%, avg=31986.53, stdev=463.27, samples=19 00:30:57.251 iops : min= 246, max= 258, avg=249.89, stdev= 3.62, samples=19 00:30:57.251 lat (msec) : 10=0.12%, 20=99.88% 00:30:57.251 cpu : usr=92.43%, sys=6.21%, ctx=21, majf=0, minf=0 00:30:57.251 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:57.251 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:57.251 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:57.251 issued rwts: total=2502,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:57.251 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:57.251 00:30:57.251 Run status group 0 (all jobs): 00:30:57.251 READ: bw=81.1MiB/s (85.0MB/s), 21.8MiB/s-31.3MiB/s (22.8MB/s-32.8MB/s), io=814MiB (854MB), run=10002-10043msec 00:30:57.251 13:29:52 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:30:57.251 13:29:52 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:30:57.251 13:29:52 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:30:57.251 13:29:52 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:57.251 13:29:52 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:30:57.251 13:29:52 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:57.251 13:29:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:57.251 13:29:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:57.251 13:29:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:57.251 13:29:52 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:57.251 13:29:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:57.251 13:29:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:57.251 ************************************ 00:30:57.251 END TEST fio_dif_digest 00:30:57.251 ************************************ 00:30:57.251 13:29:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:57.251 00:30:57.251 real 0m10.988s 00:30:57.251 user 0m28.539s 00:30:57.251 sys 0m2.037s 00:30:57.251 13:29:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:57.251 13:29:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:57.251 13:29:52 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:30:57.251 13:29:52 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:30:57.252 13:29:52 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:57.252 13:29:52 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:30:57.252 13:29:52 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:57.252 13:29:52 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:30:57.252 13:29:52 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:57.252 13:29:52 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:57.252 rmmod nvme_tcp 00:30:57.252 rmmod nvme_fabrics 00:30:57.252 rmmod nvme_keyring 00:30:57.252 13:29:52 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 
00:30:57.252 13:29:52 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:30:57.252 13:29:52 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:30:57.252 13:29:52 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 116250 ']' 00:30:57.252 13:29:52 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 116250 00:30:57.252 13:29:52 nvmf_dif -- common/autotest_common.sh@946 -- # '[' -z 116250 ']' 00:30:57.252 13:29:52 nvmf_dif -- common/autotest_common.sh@950 -- # kill -0 116250 00:30:57.252 13:29:52 nvmf_dif -- common/autotest_common.sh@951 -- # uname 00:30:57.252 13:29:52 nvmf_dif -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:57.252 13:29:52 nvmf_dif -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 116250 00:30:57.252 killing process with pid 116250 00:30:57.252 13:29:52 nvmf_dif -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:30:57.252 13:29:52 nvmf_dif -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:30:57.252 13:29:52 nvmf_dif -- common/autotest_common.sh@964 -- # echo 'killing process with pid 116250' 00:30:57.252 13:29:52 nvmf_dif -- common/autotest_common.sh@965 -- # kill 116250 00:30:57.252 13:29:52 nvmf_dif -- common/autotest_common.sh@970 -- # wait 116250 00:30:57.252 13:29:52 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:30:57.252 13:29:52 nvmf_dif -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:30:57.252 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:57.252 Waiting for block devices as requested 00:30:57.252 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:30:57.252 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:30:57.252 13:29:53 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:57.252 13:29:53 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:57.252 13:29:53 nvmf_dif -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:57.252 13:29:53 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:57.252 13:29:53 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:57.252 13:29:53 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:57.252 13:29:53 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:57.252 13:29:53 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:30:57.252 00:30:57.252 real 0m59.595s 00:30:57.252 user 3m51.739s 00:30:57.252 sys 0m14.501s 00:30:57.252 13:29:53 nvmf_dif -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:57.252 13:29:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:57.252 ************************************ 00:30:57.252 END TEST nvmf_dif 00:30:57.252 ************************************ 00:30:57.252 13:29:53 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:30:57.252 13:29:53 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:30:57.252 13:29:53 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:57.252 13:29:53 -- common/autotest_common.sh@10 -- # set +x 00:30:57.252 ************************************ 00:30:57.252 START TEST nvmf_abort_qd_sizes 00:30:57.252 ************************************ 00:30:57.252 13:29:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:30:57.252 * Looking for test storage... 
00:30:57.252 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:30:57.252 13:29:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:30:57.252 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:30:57.252 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:57.252 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:57.252 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:57.252 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:57.252 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:57.252 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:57.252 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:57.252 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:57.252 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:57.252 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:57.252 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:30:57.252 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=c8b8b44b-387e-43b9-a950-dc0d98528a02 00:30:57.252 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:57.252 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:57.252 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:30:57.252 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:57.252 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:57.252 13:29:53 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:57.252 13:29:53 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:57.252 13:29:53 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:57.252 13:29:53 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:57.252 13:29:53 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:57.252 13:29:53 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:57.252 13:29:53 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:30:57.252 13:29:53 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:57.252 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:30:57.252 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:57.253 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:57.253 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:57.253 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:57.253 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:57.253 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:57.253 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:57.253 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:57.253 13:29:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:30:57.253 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:57.253 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:57.253 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:57.253 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:57.253 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:57.253 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:57.253 13:29:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:57.253 13:29:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:57.253 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:30:57.253 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:30:57.253 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:30:57.253 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:30:57.253 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:30:57.253 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # nvmf_veth_init 00:30:57.253 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:57.253 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:57.253 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:30:57.253 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:30:57.253 13:29:53 
nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:30:57.253 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:30:57.253 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:30:57.253 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:57.253 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:30:57.253 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:30:57.253 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:30:57.253 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:30:57.253 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:30:57.253 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:30:57.253 Cannot find device "nvmf_tgt_br" 00:30:57.253 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # true 00:30:57.253 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:30:57.253 Cannot find device "nvmf_tgt_br2" 00:30:57.253 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # true 00:30:57.253 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:30:57.253 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:30:57.253 Cannot find device "nvmf_tgt_br" 00:30:57.253 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # true 00:30:57.253 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:30:57.253 Cannot find device "nvmf_tgt_br2" 00:30:57.253 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # true 00:30:57.253 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:30:57.253 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:30:57.253 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:57.253 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:57.253 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:30:57.253 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:57.253 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:57.253 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:30:57.253 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:30:57.253 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:30:57.253 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:30:57.253 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:30:57.253 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:30:57.253 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:30:57.253 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:30:57.253 13:29:53 
nvmf_abort_qd_sizes -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:30:57.253 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:30:57.253 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:30:57.253 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:30:57.253 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:30:57.253 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:30:57.253 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:30:57.253 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:30:57.253 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:30:57.253 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:30:57.253 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:30:57.253 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:30:57.253 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:30:57.253 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:30:57.253 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:30:57.253 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:30:57.253 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:30:57.253 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:57.253 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.099 ms 00:30:57.253 00:30:57.253 --- 10.0.0.2 ping statistics --- 00:30:57.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:57.253 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:30:57.253 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:30:57.253 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:30:57.253 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:30:57.253 00:30:57.253 --- 10.0.0.3 ping statistics --- 00:30:57.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:57.254 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:30:57.254 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:30:57.254 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:57.254 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms 00:30:57.254 00:30:57.254 --- 10.0.0.1 ping statistics --- 00:30:57.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:57.254 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:30:57.254 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:57.254 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@433 -- # return 0 00:30:57.254 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:30:57.254 13:29:53 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:30:57.820 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:58.078 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:30:58.078 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:30:58.078 13:29:54 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:58.078 13:29:54 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:58.078 13:29:54 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:58.078 13:29:54 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:58.078 13:29:54 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:58.078 13:29:54 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:58.078 13:29:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:30:58.078 13:29:54 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:58.078 13:29:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@720 -- # xtrace_disable 00:30:58.078 13:29:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:58.078 13:29:54 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=117589 00:30:58.078 13:29:54 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 117589 00:30:58.078 13:29:54 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:30:58.078 13:29:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@827 -- # '[' -z 117589 ']' 00:30:58.078 13:29:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:58.078 13:29:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:58.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:58.078 13:29:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:58.078 13:29:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:58.078 13:29:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:58.078 [2024-07-15 13:29:54.775171] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:30:58.078 [2024-07-15 13:29:54.775321] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:58.336 [2024-07-15 13:29:54.919156] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:58.336 [2024-07-15 13:29:55.024234] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:58.336 [2024-07-15 13:29:55.024317] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:58.336 [2024-07-15 13:29:55.024343] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:58.336 [2024-07-15 13:29:55.024353] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:58.336 [2024-07-15 13:29:55.024362] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:58.336 [2024-07-15 13:29:55.024821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:58.336 [2024-07-15 13:29:55.024993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:58.336 [2024-07-15 13:29:55.025110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:58.336 [2024-07-15 13:29:55.025115] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:59.272 13:29:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:59.272 13:29:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # return 0 00:30:59.272 13:29:55 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:59.272 13:29:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:59.272 13:29:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:59.272 13:29:55 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:59.272 13:29:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:30:59.272 13:29:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:30:59.272 13:29:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:30:59.272 13:29:55 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:30:59.272 13:29:55 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:30:59.272 13:29:55 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n '' ]] 00:30:59.272 13:29:55 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:30:59.272 13:29:55 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:30:59.272 13:29:55 nvmf_abort_qd_sizes -- scripts/common.sh@295 -- # local bdf= 00:30:59.272 13:29:55 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:30:59.272 13:29:55 nvmf_abort_qd_sizes -- scripts/common.sh@230 -- # local class 00:30:59.272 13:29:55 nvmf_abort_qd_sizes -- scripts/common.sh@231 -- # local subclass 00:30:59.272 13:29:55 nvmf_abort_qd_sizes -- scripts/common.sh@232 -- # local progif 00:30:59.272 13:29:55 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # printf %02x 1 00:30:59.272 13:29:55 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # class=01 00:30:59.272 13:29:55 
nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # printf %02x 8 00:30:59.272 13:29:55 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # subclass=08 00:30:59.272 13:29:55 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # printf %02x 2 00:30:59.272 13:29:55 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # progif=02 00:30:59.272 13:29:55 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # hash lspci 00:30:59.272 13:29:55 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:30:59.272 13:29:55 nvmf_abort_qd_sizes -- scripts/common.sh@239 -- # lspci -mm -n -D 00:30:59.272 13:29:55 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # grep -i -- -p02 00:30:59.272 13:29:55 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # tr -d '"' 00:30:59.272 13:29:55 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:30:59.272 13:29:55 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:30:59.272 13:29:55 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:30:59.272 13:29:55 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:30:59.272 13:29:55 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:30:59.272 13:29:55 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:30:59.272 13:29:55 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:30:59.272 13:29:55 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:30:59.272 13:29:55 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:30:59.272 13:29:55 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:30:59.272 13:29:55 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:30:59.272 13:29:55 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:30:59.272 13:29:55 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:30:59.272 13:29:55 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:30:59.272 13:29:55 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:30:59.272 13:29:55 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:30:59.272 13:29:55 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:30:59.272 13:29:55 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:30:59.272 13:29:55 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:30:59.272 13:29:55 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:30:59.272 13:29:55 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:30:59.272 13:29:55 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:30:59.272 13:29:55 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:30:59.272 13:29:55 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:30:59.272 13:29:55 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:30:59.272 13:29:55 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 2 )) 00:30:59.272 13:29:55 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:30:59.272 13:29:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 00:30:59.272 13:29:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:30:59.272 13:29:55 nvmf_abort_qd_sizes -- 
target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:30:59.272 13:29:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:30:59.273 13:29:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:59.273 13:29:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:59.273 ************************************ 00:30:59.273 START TEST spdk_target_abort 00:30:59.273 ************************************ 00:30:59.273 13:29:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1121 -- # spdk_target 00:30:59.273 13:29:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:30:59.273 13:29:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:30:59.273 13:29:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:59.273 13:29:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:59.273 spdk_targetn1 00:30:59.273 13:29:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:59.273 13:29:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:59.273 13:29:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:59.273 13:29:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:59.273 [2024-07-15 13:29:55.965552] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:59.273 13:29:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:59.273 13:29:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:30:59.273 13:29:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:59.273 13:29:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:59.273 13:29:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:59.273 13:29:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:30:59.273 13:29:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:59.273 13:29:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:59.273 13:29:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:59.273 13:29:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:30:59.273 13:29:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:59.273 13:29:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:59.273 [2024-07-15 13:29:56.005775] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:59.529 13:29:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:59.529 13:29:56 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:30:59.530 13:29:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:30:59.530 13:29:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:30:59.530 13:29:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:30:59.530 13:29:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:30:59.530 13:29:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:30:59.530 13:29:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:30:59.530 13:29:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:30:59.530 13:29:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:30:59.530 13:29:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:59.530 13:29:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:30:59.530 13:29:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:59.530 13:29:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:30:59.530 13:29:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:59.530 13:29:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:30:59.530 13:29:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:59.530 13:29:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:59.530 13:29:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:59.530 13:29:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:59.530 13:29:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:59.530 13:29:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:02.812 Initializing NVMe Controllers 00:31:02.812 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:02.812 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:02.812 Initialization complete. Launching workers. 
00:31:02.812 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10602, failed: 0 00:31:02.812 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1093, failed to submit 9509 00:31:02.812 success 784, unsuccess 309, failed 0 00:31:02.813 13:29:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:02.813 13:29:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:06.125 Initializing NVMe Controllers 00:31:06.125 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:06.125 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:06.125 Initialization complete. Launching workers. 00:31:06.125 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 6023, failed: 0 00:31:06.125 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1291, failed to submit 4732 00:31:06.125 success 247, unsuccess 1044, failed 0 00:31:06.125 13:30:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:06.125 13:30:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:09.427 Initializing NVMe Controllers 00:31:09.427 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:09.427 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:09.427 Initialization complete. Launching workers. 
00:31:09.427 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 30466, failed: 0 00:31:09.427 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2602, failed to submit 27864 00:31:09.427 success 458, unsuccess 2144, failed 0 00:31:09.427 13:30:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:31:09.427 13:30:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.427 13:30:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:09.427 13:30:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.427 13:30:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:31:09.427 13:30:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.427 13:30:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:09.685 13:30:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.685 13:30:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 117589 00:31:09.685 13:30:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@946 -- # '[' -z 117589 ']' 00:31:09.685 13:30:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # kill -0 117589 00:31:09.685 13:30:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # uname 00:31:09.685 13:30:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:09.685 13:30:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 117589 00:31:09.685 killing process with pid 117589 00:31:09.685 13:30:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:31:09.685 13:30:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:31:09.685 13:30:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 117589' 00:31:09.685 13:30:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@965 -- # kill 117589 00:31:09.685 13:30:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # wait 117589 00:31:09.943 ************************************ 00:31:09.943 END TEST spdk_target_abort 00:31:09.943 ************************************ 00:31:09.943 00:31:09.943 real 0m10.542s 00:31:09.943 user 0m43.302s 00:31:09.943 sys 0m1.726s 00:31:09.943 13:30:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:09.943 13:30:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:09.943 13:30:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:31:09.943 13:30:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:31:09.943 13:30:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:09.943 13:30:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:09.943 ************************************ 00:31:09.943 START TEST kernel_target_abort 00:31:09.943 
************************************ 00:31:09.943 13:30:06 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1121 -- # kernel_target 00:31:09.943 13:30:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:31:09.943 13:30:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:31:09.943 13:30:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:09.943 13:30:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:09.943 13:30:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:09.944 13:30:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:09.944 13:30:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:09.944 13:30:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:09.944 13:30:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:09.944 13:30:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:09.944 13:30:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:09.944 13:30:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:31:09.944 13:30:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:31:09.944 13:30:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:31:09.944 13:30:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:09.944 13:30:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:09.944 13:30:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:09.944 13:30:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:31:09.944 13:30:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:31:09.944 13:30:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:31:09.944 13:30:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:31:09.944 13:30:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:31:10.202 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:10.202 Waiting for block devices as requested 00:31:10.202 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:31:10.461 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:31:10.461 13:30:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:31:10.461 13:30:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:31:10.461 13:30:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:31:10.461 13:30:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:31:10.461 13:30:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:10.461 13:30:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:31:10.461 13:30:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:31:10.461 13:30:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:31:10.461 13:30:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:31:10.461 No valid GPT data, bailing 00:31:10.461 13:30:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:31:10.461 13:30:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:31:10.461 13:30:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:31:10.461 13:30:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:31:10.461 13:30:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:31:10.461 13:30:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:31:10.461 13:30:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:31:10.461 13:30:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # local device=nvme0n2 00:31:10.461 13:30:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:31:10.461 13:30:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:31:10.461 13:30:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:31:10.461 13:30:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:31:10.461 13:30:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:31:10.461 No valid GPT data, bailing 00:31:10.461 13:30:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:31:10.461 13:30:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:31:10.461 13:30:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:31:10.461 13:30:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:31:10.461 13:30:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:31:10.461 13:30:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:31:10.461 13:30:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:31:10.461 13:30:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # local device=nvme0n3 00:31:10.462 13:30:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:31:10.462 13:30:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:31:10.462 13:30:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:31:10.462 13:30:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:31:10.462 13:30:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:31:10.721 No valid GPT data, bailing 00:31:10.721 13:30:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:31:10.721 13:30:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:31:10.721 13:30:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:31:10.721 13:30:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:31:10.721 13:30:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:31:10.721 13:30:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:31:10.721 13:30:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:31:10.721 13:30:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:31:10.721 13:30:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:31:10.721 13:30:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:31:10.721 13:30:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:31:10.721 13:30:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:31:10.721 13:30:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:31:10.721 No valid GPT data, bailing 00:31:10.721 13:30:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:31:10.721 13:30:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:31:10.721 13:30:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:31:10.721 13:30:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:31:10.721 13:30:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ 
-b /dev/nvme1n1 ]] 00:31:10.721 13:30:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:10.721 13:30:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:10.721 13:30:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:31:10.721 13:30:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:31:10.721 13:30:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:31:10.721 13:30:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:31:10.721 13:30:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:31:10.721 13:30:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:31:10.721 13:30:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:31:10.721 13:30:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:31:10.721 13:30:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:31:10.721 13:30:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:31:10.721 13:30:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 --hostid=c8b8b44b-387e-43b9-a950-dc0d98528a02 -a 10.0.0.1 -t tcp -s 4420 00:31:10.721 00:31:10.721 Discovery Log Number of Records 2, Generation counter 2 00:31:10.721 =====Discovery Log Entry 0====== 00:31:10.721 trtype: tcp 00:31:10.721 adrfam: ipv4 00:31:10.721 subtype: current discovery subsystem 00:31:10.721 treq: not specified, sq flow control disable supported 00:31:10.721 portid: 1 00:31:10.721 trsvcid: 4420 00:31:10.721 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:31:10.721 traddr: 10.0.0.1 00:31:10.721 eflags: none 00:31:10.721 sectype: none 00:31:10.721 =====Discovery Log Entry 1====== 00:31:10.721 trtype: tcp 00:31:10.721 adrfam: ipv4 00:31:10.721 subtype: nvme subsystem 00:31:10.721 treq: not specified, sq flow control disable supported 00:31:10.721 portid: 1 00:31:10.721 trsvcid: 4420 00:31:10.721 subnqn: nqn.2016-06.io.spdk:testnqn 00:31:10.721 traddr: 10.0.0.1 00:31:10.721 eflags: none 00:31:10.721 sectype: none 00:31:10.721 13:30:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:31:10.721 13:30:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:31:10.721 13:30:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:31:10.721 13:30:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:31:10.721 13:30:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:31:10.721 13:30:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:31:10.721 13:30:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:31:10.721 13:30:07 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:31:10.721 13:30:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:31:10.721 13:30:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:10.721 13:30:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:31:10.721 13:30:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:10.721 13:30:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:31:10.721 13:30:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:10.721 13:30:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:31:10.721 13:30:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:10.721 13:30:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:31:10.721 13:30:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:10.721 13:30:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:10.721 13:30:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:10.721 13:30:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:14.006 Initializing NVMe Controllers 00:31:14.006 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:14.006 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:14.006 Initialization complete. Launching workers. 00:31:14.006 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 30739, failed: 0 00:31:14.006 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 30739, failed to submit 0 00:31:14.006 success 0, unsuccess 30739, failed 0 00:31:14.006 13:30:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:14.006 13:30:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:17.331 Initializing NVMe Controllers 00:31:17.331 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:17.331 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:17.331 Initialization complete. Launching workers. 
00:31:17.331 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 67612, failed: 0 00:31:17.331 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 28874, failed to submit 38738 00:31:17.331 success 0, unsuccess 28874, failed 0 00:31:17.331 13:30:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:17.331 13:30:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:20.616 Initializing NVMe Controllers 00:31:20.616 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:20.616 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:20.616 Initialization complete. Launching workers. 00:31:20.616 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 79078, failed: 0 00:31:20.616 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 19748, failed to submit 59330 00:31:20.616 success 0, unsuccess 19748, failed 0 00:31:20.616 13:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:31:20.616 13:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:31:20.616 13:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:31:20.616 13:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:20.616 13:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:20.616 13:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:31:20.616 13:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:20.616 13:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:31:20.616 13:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:31:20.616 13:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:31:20.875 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:21.810 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:31:22.068 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:31:22.068 00:31:22.068 real 0m12.148s 00:31:22.068 user 0m6.004s 00:31:22.068 sys 0m3.579s 00:31:22.068 ************************************ 00:31:22.068 13:30:18 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:22.068 13:30:18 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:22.068 END TEST kernel_target_abort 00:31:22.068 ************************************ 00:31:22.068 13:30:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:31:22.068 13:30:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:31:22.068 
13:30:18 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:22.068 13:30:18 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:31:22.068 13:30:18 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:22.068 13:30:18 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:31:22.068 13:30:18 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:22.068 13:30:18 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:22.068 rmmod nvme_tcp 00:31:22.068 rmmod nvme_fabrics 00:31:22.068 rmmod nvme_keyring 00:31:22.068 13:30:18 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:22.068 13:30:18 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:31:22.068 13:30:18 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:31:22.068 13:30:18 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 117589 ']' 00:31:22.068 13:30:18 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 117589 00:31:22.068 13:30:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@946 -- # '[' -z 117589 ']' 00:31:22.068 13:30:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # kill -0 117589 00:31:22.068 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (117589) - No such process 00:31:22.068 Process with pid 117589 is not found 00:31:22.068 13:30:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@973 -- # echo 'Process with pid 117589 is not found' 00:31:22.068 13:30:18 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:31:22.068 13:30:18 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:31:22.665 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:22.665 Waiting for block devices as requested 00:31:22.665 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:31:22.665 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:31:22.665 13:30:19 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:22.665 13:30:19 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:22.665 13:30:19 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:22.665 13:30:19 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:22.665 13:30:19 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:22.665 13:30:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:22.665 13:30:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:22.665 13:30:19 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:31:22.665 00:31:22.665 real 0m25.934s 00:31:22.665 user 0m50.473s 00:31:22.665 sys 0m6.667s 00:31:22.665 13:30:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:22.665 13:30:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:22.665 ************************************ 00:31:22.665 END TEST nvmf_abort_qd_sizes 00:31:22.665 ************************************ 00:31:22.924 13:30:19 -- spdk/autotest.sh@295 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:31:22.924 13:30:19 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:31:22.924 13:30:19 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:22.924 13:30:19 -- common/autotest_common.sh@10 -- # set +x 
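For reference, the kernel_target_abort phase that just finished drives everything through nvmet configfs: nvmf/common.sh@658-677 builds the subsystem, namespace and TCP port with mkdir/echo/ln -s, and @684-695 tears it down again. A condensed sketch of that flow plus the queue-depth sweep follows; the configfs attribute names are an assumption filled in from the usual nvmet layout, since the trace only shows the values being echoed, not the file names they land in.

modprobe nvmet nvmet_tcp
sub=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=/sys/kernel/config/nvmet/ports/1
mkdir "$sub"
mkdir "$sub/namespaces/1"
mkdir "$port"
echo 1            > "$sub/attr_allow_any_host"        # attribute names assumed, not visible in the trace
echo /dev/nvme1n1 > "$sub/namespaces/1/device_path"
echo 1            > "$sub/namespaces/1/enable"
echo 10.0.0.1     > "$port/addr_traddr"
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"
ln -s "$sub" "$port/subsystems/"
# queue-depth sweep against the target, same flags as abort_qd_sizes.sh@32-34 above
for qd in 4 24 64; do
    ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
done
# teardown mirrors clean_kernel_target: disable the namespace, unlink the port, remove the tree
echo 0 > "$sub/namespaces/1/enable"
rm -f "$port/subsystems/nqn.2016-06.io.spdk:testnqn"
rmdir "$sub/namespaces/1" "$port" "$sub"
modprobe -r nvmet_tcp nvmet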
00:31:22.924 ************************************ 00:31:22.924 START TEST keyring_file 00:31:22.924 ************************************ 00:31:22.924 13:30:19 keyring_file -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:31:22.924 * Looking for test storage... 00:31:22.924 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:31:22.924 13:30:19 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:31:22.924 13:30:19 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:31:22.924 13:30:19 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:31:22.924 13:30:19 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:22.924 13:30:19 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:22.924 13:30:19 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:22.924 13:30:19 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:22.924 13:30:19 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:22.924 13:30:19 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:22.924 13:30:19 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:22.924 13:30:19 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:22.924 13:30:19 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:22.924 13:30:19 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:22.924 13:30:19 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:31:22.924 13:30:19 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=c8b8b44b-387e-43b9-a950-dc0d98528a02 00:31:22.924 13:30:19 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:22.924 13:30:19 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:22.924 13:30:19 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:31:22.924 13:30:19 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:22.924 13:30:19 keyring_file -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:22.924 13:30:19 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:22.924 13:30:19 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:22.924 13:30:19 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:22.924 13:30:19 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.924 13:30:19 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.924 13:30:19 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.924 13:30:19 keyring_file -- paths/export.sh@5 -- # export PATH 00:31:22.924 13:30:19 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.924 13:30:19 keyring_file -- nvmf/common.sh@47 -- # : 0 00:31:22.924 13:30:19 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:22.924 13:30:19 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:22.924 13:30:19 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:22.924 13:30:19 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:22.924 13:30:19 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:22.924 13:30:19 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:22.924 13:30:19 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:22.924 13:30:19 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:22.924 13:30:19 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:31:22.924 13:30:19 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:31:22.924 13:30:19 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:31:22.924 13:30:19 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:31:22.924 13:30:19 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:31:22.924 13:30:19 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:31:22.924 13:30:19 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:31:22.924 13:30:19 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:31:22.924 13:30:19 keyring_file -- keyring/common.sh@17 -- # name=key0 00:31:22.924 13:30:19 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:31:22.924 13:30:19 keyring_file -- keyring/common.sh@17 -- # digest=0 00:31:22.924 13:30:19 keyring_file -- keyring/common.sh@18 -- # mktemp 00:31:22.924 13:30:19 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.yiLYfHwZd4 00:31:22.924 13:30:19 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:31:22.924 13:30:19 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:31:22.925 13:30:19 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:31:22.925 13:30:19 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:31:22.925 13:30:19 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:31:22.925 13:30:19 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:31:22.925 13:30:19 keyring_file -- nvmf/common.sh@705 -- # python - 00:31:22.925 13:30:19 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.yiLYfHwZd4 00:31:22.925 13:30:19 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.yiLYfHwZd4 00:31:22.925 13:30:19 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.yiLYfHwZd4 00:31:22.925 13:30:19 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:31:22.925 13:30:19 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:31:22.925 13:30:19 keyring_file -- keyring/common.sh@17 -- # name=key1 00:31:22.925 13:30:19 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:31:22.925 13:30:19 keyring_file -- keyring/common.sh@17 -- # digest=0 00:31:22.925 13:30:19 keyring_file -- keyring/common.sh@18 -- # mktemp 00:31:22.925 13:30:19 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.xGLpxrPEMk 00:31:22.925 13:30:19 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:31:22.925 13:30:19 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:31:22.925 13:30:19 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:31:22.925 13:30:19 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:31:22.925 13:30:19 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:31:22.925 13:30:19 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:31:22.925 13:30:19 keyring_file -- nvmf/common.sh@705 -- # python - 00:31:23.183 13:30:19 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.xGLpxrPEMk 00:31:23.183 13:30:19 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.xGLpxrPEMk 00:31:23.183 13:30:19 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.xGLpxrPEMk 00:31:23.183 13:30:19 keyring_file -- keyring/file.sh@30 -- # tgtpid=118459 00:31:23.183 13:30:19 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:23.183 13:30:19 keyring_file -- keyring/file.sh@32 -- # waitforlisten 118459 00:31:23.183 13:30:19 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 118459 ']' 00:31:23.183 13:30:19 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:23.183 13:30:19 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:23.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:23.183 13:30:19 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:23.183 13:30:19 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:23.183 13:30:19 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:23.183 [2024-07-15 13:30:19.736372] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
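Before the target comes up, prep_key above turns each hex key into an NVMe TLS PSK interchange string and writes it to a mktemp path with mode 0600. A minimal sketch of that step, assuming the layout the traced helper appears to use (prefix NVMeTLSkey-1, a two-hex-digit hash indicator, then base64 of the raw key bytes with a little-endian CRC-32 appended, terminated by a colon); verify against format_interchange_psk in nvmf/common.sh before relying on it:

key=00112233445566778899aabbccddeeff   # key0 from file.sh@15 above
digest=0                               # 0 = no hash transform, as in the trace
path=$(mktemp)
python - "$key" "$digest" > "$path" <<'PY'
import base64, sys, zlib
key = bytes.fromhex(sys.argv[1])
crc = zlib.crc32(key).to_bytes(4, "little")           # CRC byte order is an assumption
psk = base64.b64encode(key + crc).decode()
print("NVMeTLSkey-1:{:02x}:{}:".format(int(sys.argv[2]), psk), end="")
PY
chmod 0600 "$path"   # keyring_file rejects keys readable by group/other (exercised later in this run)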
00:31:23.183 [2024-07-15 13:30:19.736495] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118459 ] 00:31:23.183 [2024-07-15 13:30:19.877568] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:23.440 [2024-07-15 13:30:19.966847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:24.006 13:30:20 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:24.006 13:30:20 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:31:24.006 13:30:20 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:31:24.006 13:30:20 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:24.006 13:30:20 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:24.006 [2024-07-15 13:30:20.683948] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:24.006 null0 00:31:24.006 [2024-07-15 13:30:20.715918] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:31:24.006 [2024-07-15 13:30:20.716134] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:31:24.006 [2024-07-15 13:30:20.723917] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:31:24.006 13:30:20 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:24.006 13:30:20 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:31:24.006 13:30:20 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:31:24.006 13:30:20 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:31:24.006 13:30:20 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:24.006 13:30:20 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:24.006 13:30:20 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:24.006 13:30:20 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:24.006 13:30:20 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:31:24.006 13:30:20 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:24.006 13:30:20 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:24.007 [2024-07-15 13:30:20.739891] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:31:24.007 2024/07/15 13:30:20 error on JSON-RPC call, method: nvmf_subsystem_add_listener, params: map[listen_address:map[traddr:127.0.0.1 trsvcid:4420 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode0 secure_channel:%!s(bool=false)], err: error received for nvmf_subsystem_add_listener method, err: Code=-32602 Msg=Invalid parameters 00:31:24.007 request: 00:31:24.007 { 00:31:24.007 "method": "nvmf_subsystem_add_listener", 00:31:24.007 "params": { 00:31:24.007 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:31:24.265 "secure_channel": false, 00:31:24.265 "listen_address": { 00:31:24.265 "trtype": "tcp", 00:31:24.265 "traddr": "127.0.0.1", 00:31:24.265 "trsvcid": "4420" 00:31:24.265 } 00:31:24.265 } 00:31:24.265 } 00:31:24.265 Got JSON-RPC error response 00:31:24.265 
GoRPCClient: error on JSON-RPC call 00:31:24.265 13:30:20 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:24.265 13:30:20 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:31:24.265 13:30:20 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:24.265 13:30:20 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:24.265 13:30:20 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:24.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:24.265 13:30:20 keyring_file -- keyring/file.sh@46 -- # bperfpid=118494 00:31:24.265 13:30:20 keyring_file -- keyring/file.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:31:24.265 13:30:20 keyring_file -- keyring/file.sh@48 -- # waitforlisten 118494 /var/tmp/bperf.sock 00:31:24.265 13:30:20 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 118494 ']' 00:31:24.265 13:30:20 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:24.265 13:30:20 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:24.265 13:30:20 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:24.265 13:30:20 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:24.265 13:30:20 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:24.265 [2024-07-15 13:30:20.802810] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:31:24.265 [2024-07-15 13:30:20.803110] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118494 ] 00:31:24.265 [2024-07-15 13:30:20.940671] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:24.546 [2024-07-15 13:30:21.015527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:25.112 13:30:21 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:25.112 13:30:21 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:31:25.112 13:30:21 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.yiLYfHwZd4 00:31:25.112 13:30:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.yiLYfHwZd4 00:31:25.369 13:30:21 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.xGLpxrPEMk 00:31:25.369 13:30:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.xGLpxrPEMk 00:31:25.627 13:30:22 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:31:25.627 13:30:22 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:31:25.628 13:30:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:25.628 13:30:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:25.628 13:30:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:25.885 13:30:22 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.yiLYfHwZd4 == \/\t\m\p\/\t\m\p\.\y\i\L\Y\f\H\w\Z\d\4 ]] 00:31:25.885 
13:30:22 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:31:25.885 13:30:22 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:31:25.885 13:30:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:25.885 13:30:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:25.885 13:30:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:26.143 13:30:22 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.xGLpxrPEMk == \/\t\m\p\/\t\m\p\.\x\G\L\p\x\r\P\E\M\k ]] 00:31:26.143 13:30:22 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:31:26.143 13:30:22 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:26.143 13:30:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:26.143 13:30:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:26.143 13:30:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:26.143 13:30:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:26.403 13:30:22 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:31:26.403 13:30:22 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:31:26.403 13:30:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:26.403 13:30:22 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:31:26.403 13:30:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:26.403 13:30:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:26.403 13:30:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:26.662 13:30:23 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:31:26.662 13:30:23 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:26.662 13:30:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:26.920 [2024-07-15 13:30:23.415600] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:26.920 nvme0n1 00:31:26.920 13:30:23 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:31:26.920 13:30:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:26.920 13:30:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:26.920 13:30:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:26.920 13:30:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:26.921 13:30:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:27.178 13:30:23 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:31:27.178 13:30:23 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:31:27.178 13:30:23 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:31:27.178 13:30:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:27.178 13:30:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:27.178 13:30:23 keyring_file -- 
keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:27.178 13:30:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:27.436 13:30:24 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:31:27.436 13:30:24 keyring_file -- keyring/file.sh@62 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:27.436 Running I/O for 1 seconds... 00:31:28.810 00:31:28.810 Latency(us) 00:31:28.810 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:28.810 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:31:28.810 nvme0n1 : 1.00 12624.70 49.32 0.00 0.00 10108.23 4796.04 17754.30 00:31:28.810 =================================================================================================================== 00:31:28.810 Total : 12624.70 49.32 0.00 0.00 10108.23 4796.04 17754.30 00:31:28.810 0 00:31:28.810 13:30:25 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:31:28.810 13:30:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:31:28.810 13:30:25 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:31:28.810 13:30:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:28.810 13:30:25 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:28.810 13:30:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:28.811 13:30:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:28.811 13:30:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:29.069 13:30:25 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:31:29.069 13:30:25 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:31:29.069 13:30:25 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:31:29.069 13:30:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:29.069 13:30:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:29.069 13:30:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:29.069 13:30:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:29.328 13:30:25 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:31:29.328 13:30:25 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:31:29.328 13:30:25 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:31:29.328 13:30:25 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:31:29.328 13:30:25 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:31:29.328 13:30:25 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:29.328 13:30:25 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:31:29.328 13:30:25 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:29.328 13:30:25 keyring_file -- 
common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:31:29.328 13:30:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:31:29.586 [2024-07-15 13:30:26.200113] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:31:29.586 [2024-07-15 13:30:26.200754] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f6800 (107): Transport endpoint is not connected 00:31:29.586 [2024-07-15 13:30:26.201742] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f6800 (9): Bad file descriptor 00:31:29.586 [2024-07-15 13:30:26.202738] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:29.586 [2024-07-15 13:30:26.202779] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:31:29.586 [2024-07-15 13:30:26.202814] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:29.586 2024/07/15 13:30:26 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 psk:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:31:29.586 request: 00:31:29.586 { 00:31:29.586 "method": "bdev_nvme_attach_controller", 00:31:29.586 "params": { 00:31:29.586 "name": "nvme0", 00:31:29.586 "trtype": "tcp", 00:31:29.586 "traddr": "127.0.0.1", 00:31:29.586 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:29.586 "adrfam": "ipv4", 00:31:29.586 "trsvcid": "4420", 00:31:29.586 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:29.586 "psk": "key1" 00:31:29.586 } 00:31:29.586 } 00:31:29.586 Got JSON-RPC error response 00:31:29.586 GoRPCClient: error on JSON-RPC call 00:31:29.586 13:30:26 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:31:29.586 13:30:26 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:29.586 13:30:26 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:29.586 13:30:26 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:29.586 13:30:26 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:31:29.586 13:30:26 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:29.586 13:30:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:29.586 13:30:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:29.586 13:30:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:29.586 13:30:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:29.845 13:30:26 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:31:29.845 13:30:26 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:31:29.845 13:30:26 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:31:29.845 13:30:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:29.845 13:30:26 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:29.845 13:30:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:29.845 13:30:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:30.103 13:30:26 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:31:30.103 13:30:26 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:31:30.103 13:30:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:31:30.362 13:30:26 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:31:30.362 13:30:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:31:30.620 13:30:27 keyring_file -- keyring/file.sh@77 -- # jq length 00:31:30.620 13:30:27 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:31:30.620 13:30:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:30.879 13:30:27 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:31:30.879 13:30:27 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.yiLYfHwZd4 00:31:30.879 13:30:27 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.yiLYfHwZd4 00:31:30.879 13:30:27 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:31:30.879 13:30:27 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.yiLYfHwZd4 00:31:30.879 13:30:27 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:31:30.879 13:30:27 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:30.879 13:30:27 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:31:30.879 13:30:27 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:30.879 13:30:27 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.yiLYfHwZd4 00:31:30.879 13:30:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.yiLYfHwZd4 00:31:30.879 [2024-07-15 13:30:27.585104] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.yiLYfHwZd4': 0100660 00:31:30.879 [2024-07-15 13:30:27.585161] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:31:30.879 2024/07/15 13:30:27 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.yiLYfHwZd4], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:31:30.879 request: 00:31:30.879 { 00:31:30.879 "method": "keyring_file_add_key", 00:31:30.879 "params": { 00:31:30.879 "name": "key0", 00:31:30.879 "path": "/tmp/tmp.yiLYfHwZd4" 00:31:30.879 } 00:31:30.879 } 00:31:30.879 Got JSON-RPC error response 00:31:30.879 GoRPCClient: error on JSON-RPC call 00:31:30.879 13:30:27 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:31:30.879 13:30:27 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:30.879 13:30:27 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:30.879 13:30:27 keyring_file -- common/autotest_common.sh@675 -- # (( !es 
== 0 )) 00:31:30.879 13:30:27 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.yiLYfHwZd4 00:31:30.879 13:30:27 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.yiLYfHwZd4 00:31:30.879 13:30:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.yiLYfHwZd4 00:31:31.138 13:30:27 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.yiLYfHwZd4 00:31:31.396 13:30:27 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:31:31.396 13:30:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:31.396 13:30:27 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:31.396 13:30:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:31.396 13:30:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:31.396 13:30:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:31.396 13:30:28 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:31:31.396 13:30:28 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:31.396 13:30:28 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:31:31.396 13:30:28 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:31.396 13:30:28 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:31:31.396 13:30:28 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:31.396 13:30:28 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:31:31.396 13:30:28 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:31.396 13:30:28 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:31.396 13:30:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:31.655 [2024-07-15 13:30:28.305295] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.yiLYfHwZd4': No such file or directory 00:31:31.655 [2024-07-15 13:30:28.305339] nvme_tcp.c:2573:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:31:31.655 [2024-07-15 13:30:28.305365] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:31:31.655 [2024-07-15 13:30:28.305374] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:31.655 [2024-07-15 13:30:28.305383] bdev_nvme.c:6269:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:31:31.655 2024/07/15 13:30:28 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 psk:key0 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for 
bdev_nvme_attach_controller method, err: Code=-19 Msg=No such device 00:31:31.655 request: 00:31:31.655 { 00:31:31.655 "method": "bdev_nvme_attach_controller", 00:31:31.655 "params": { 00:31:31.655 "name": "nvme0", 00:31:31.655 "trtype": "tcp", 00:31:31.655 "traddr": "127.0.0.1", 00:31:31.656 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:31.656 "adrfam": "ipv4", 00:31:31.656 "trsvcid": "4420", 00:31:31.656 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:31.656 "psk": "key0" 00:31:31.656 } 00:31:31.656 } 00:31:31.656 Got JSON-RPC error response 00:31:31.656 GoRPCClient: error on JSON-RPC call 00:31:31.656 13:30:28 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:31:31.656 13:30:28 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:31.656 13:30:28 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:31.656 13:30:28 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:31.656 13:30:28 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:31:31.656 13:30:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:31:31.915 13:30:28 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:31:31.915 13:30:28 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:31:31.915 13:30:28 keyring_file -- keyring/common.sh@17 -- # name=key0 00:31:31.915 13:30:28 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:31:31.915 13:30:28 keyring_file -- keyring/common.sh@17 -- # digest=0 00:31:31.915 13:30:28 keyring_file -- keyring/common.sh@18 -- # mktemp 00:31:31.915 13:30:28 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.4O0X22PX7U 00:31:31.915 13:30:28 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:31:31.915 13:30:28 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:31:31.915 13:30:28 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:31:31.915 13:30:28 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:31:31.915 13:30:28 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:31:31.915 13:30:28 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:31:31.915 13:30:28 keyring_file -- nvmf/common.sh@705 -- # python - 00:31:31.915 13:30:28 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.4O0X22PX7U 00:31:31.915 13:30:28 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.4O0X22PX7U 00:31:31.915 13:30:28 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.4O0X22PX7U 00:31:31.915 13:30:28 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.4O0X22PX7U 00:31:31.915 13:30:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.4O0X22PX7U 00:31:32.173 13:30:28 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:32.173 13:30:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:32.431 nvme0n1 00:31:32.431 
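The negative tests that just ran (file.sh@80-92) reduce to two properties of the file-based keyring: keyring_file_add_key refuses a key file whose mode is not 0600, and bdev_nvme_attach_controller fails cleanly once the backing file has been removed. The same checks can be reproduced by hand against the bperf RPC socket; the commands below are the ones from the trace, wrapped in a small helper function for readability:

rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }

chmod 0660 /tmp/tmp.yiLYfHwZd4
rpc keyring_file_add_key key0 /tmp/tmp.yiLYfHwZd4     # fails: "Invalid permissions for key file ... 0100660"

chmod 0600 /tmp/tmp.yiLYfHwZd4
rpc keyring_file_add_key key0 /tmp/tmp.yiLYfHwZd4     # succeeds
rm -f /tmp/tmp.yiLYfHwZd4
rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0     # fails: key file no longer exists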
13:30:29 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:31:32.431 13:30:29 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:32.431 13:30:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:32.431 13:30:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:32.432 13:30:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:32.432 13:30:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:32.712 13:30:29 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:31:32.712 13:30:29 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:31:32.712 13:30:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:31:32.970 13:30:29 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:31:32.970 13:30:29 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:31:32.970 13:30:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:32.970 13:30:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:32.970 13:30:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:33.229 13:30:29 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:31:33.229 13:30:29 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:31:33.229 13:30:29 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:33.229 13:30:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:33.229 13:30:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:33.229 13:30:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:33.229 13:30:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:33.488 13:30:30 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:31:33.488 13:30:30 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:31:33.488 13:30:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:31:33.746 13:30:30 keyring_file -- keyring/file.sh@104 -- # jq length 00:31:33.746 13:30:30 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:31:33.746 13:30:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:34.005 13:30:30 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:31:34.005 13:30:30 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.4O0X22PX7U 00:31:34.005 13:30:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.4O0X22PX7U 00:31:34.263 13:30:30 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.xGLpxrPEMk 00:31:34.263 13:30:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.xGLpxrPEMk 00:31:34.521 13:30:31 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q 
nqn.2016-06.io.spdk:host0 --psk key0 00:31:34.521 13:30:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:34.779 nvme0n1 00:31:34.779 13:30:31 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:31:34.779 13:30:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:31:35.038 13:30:31 keyring_file -- keyring/file.sh@112 -- # config='{ 00:31:35.038 "subsystems": [ 00:31:35.038 { 00:31:35.038 "subsystem": "keyring", 00:31:35.038 "config": [ 00:31:35.038 { 00:31:35.038 "method": "keyring_file_add_key", 00:31:35.038 "params": { 00:31:35.038 "name": "key0", 00:31:35.038 "path": "/tmp/tmp.4O0X22PX7U" 00:31:35.038 } 00:31:35.038 }, 00:31:35.038 { 00:31:35.038 "method": "keyring_file_add_key", 00:31:35.038 "params": { 00:31:35.038 "name": "key1", 00:31:35.038 "path": "/tmp/tmp.xGLpxrPEMk" 00:31:35.038 } 00:31:35.038 } 00:31:35.038 ] 00:31:35.038 }, 00:31:35.038 { 00:31:35.038 "subsystem": "iobuf", 00:31:35.038 "config": [ 00:31:35.038 { 00:31:35.038 "method": "iobuf_set_options", 00:31:35.038 "params": { 00:31:35.038 "large_bufsize": 135168, 00:31:35.038 "large_pool_count": 1024, 00:31:35.038 "small_bufsize": 8192, 00:31:35.038 "small_pool_count": 8192 00:31:35.038 } 00:31:35.038 } 00:31:35.038 ] 00:31:35.038 }, 00:31:35.038 { 00:31:35.038 "subsystem": "sock", 00:31:35.038 "config": [ 00:31:35.038 { 00:31:35.038 "method": "sock_set_default_impl", 00:31:35.038 "params": { 00:31:35.038 "impl_name": "posix" 00:31:35.038 } 00:31:35.038 }, 00:31:35.038 { 00:31:35.038 "method": "sock_impl_set_options", 00:31:35.038 "params": { 00:31:35.038 "enable_ktls": false, 00:31:35.038 "enable_placement_id": 0, 00:31:35.038 "enable_quickack": false, 00:31:35.038 "enable_recv_pipe": true, 00:31:35.038 "enable_zerocopy_send_client": false, 00:31:35.038 "enable_zerocopy_send_server": true, 00:31:35.038 "impl_name": "ssl", 00:31:35.038 "recv_buf_size": 4096, 00:31:35.038 "send_buf_size": 4096, 00:31:35.038 "tls_version": 0, 00:31:35.038 "zerocopy_threshold": 0 00:31:35.038 } 00:31:35.038 }, 00:31:35.038 { 00:31:35.038 "method": "sock_impl_set_options", 00:31:35.038 "params": { 00:31:35.038 "enable_ktls": false, 00:31:35.038 "enable_placement_id": 0, 00:31:35.038 "enable_quickack": false, 00:31:35.038 "enable_recv_pipe": true, 00:31:35.038 "enable_zerocopy_send_client": false, 00:31:35.038 "enable_zerocopy_send_server": true, 00:31:35.038 "impl_name": "posix", 00:31:35.038 "recv_buf_size": 2097152, 00:31:35.038 "send_buf_size": 2097152, 00:31:35.038 "tls_version": 0, 00:31:35.038 "zerocopy_threshold": 0 00:31:35.038 } 00:31:35.038 } 00:31:35.038 ] 00:31:35.038 }, 00:31:35.038 { 00:31:35.038 "subsystem": "vmd", 00:31:35.038 "config": [] 00:31:35.038 }, 00:31:35.038 { 00:31:35.038 "subsystem": "accel", 00:31:35.038 "config": [ 00:31:35.038 { 00:31:35.038 "method": "accel_set_options", 00:31:35.038 "params": { 00:31:35.038 "buf_count": 2048, 00:31:35.038 "large_cache_size": 16, 00:31:35.038 "sequence_count": 2048, 00:31:35.038 "small_cache_size": 128, 00:31:35.038 "task_count": 2048 00:31:35.038 } 00:31:35.038 } 00:31:35.038 ] 00:31:35.038 }, 00:31:35.038 { 00:31:35.038 "subsystem": "bdev", 00:31:35.038 "config": [ 00:31:35.038 { 00:31:35.038 "method": "bdev_set_options", 00:31:35.038 "params": { 00:31:35.038 "bdev_auto_examine": 
true, 00:31:35.038 "bdev_io_cache_size": 256, 00:31:35.038 "bdev_io_pool_size": 65535, 00:31:35.038 "iobuf_large_cache_size": 16, 00:31:35.038 "iobuf_small_cache_size": 128 00:31:35.038 } 00:31:35.038 }, 00:31:35.038 { 00:31:35.038 "method": "bdev_raid_set_options", 00:31:35.038 "params": { 00:31:35.038 "process_window_size_kb": 1024 00:31:35.038 } 00:31:35.038 }, 00:31:35.038 { 00:31:35.038 "method": "bdev_iscsi_set_options", 00:31:35.038 "params": { 00:31:35.038 "timeout_sec": 30 00:31:35.038 } 00:31:35.038 }, 00:31:35.038 { 00:31:35.038 "method": "bdev_nvme_set_options", 00:31:35.038 "params": { 00:31:35.038 "action_on_timeout": "none", 00:31:35.038 "allow_accel_sequence": false, 00:31:35.038 "arbitration_burst": 0, 00:31:35.038 "bdev_retry_count": 3, 00:31:35.038 "ctrlr_loss_timeout_sec": 0, 00:31:35.038 "delay_cmd_submit": true, 00:31:35.038 "dhchap_dhgroups": [ 00:31:35.038 "null", 00:31:35.038 "ffdhe2048", 00:31:35.038 "ffdhe3072", 00:31:35.038 "ffdhe4096", 00:31:35.038 "ffdhe6144", 00:31:35.038 "ffdhe8192" 00:31:35.038 ], 00:31:35.038 "dhchap_digests": [ 00:31:35.038 "sha256", 00:31:35.038 "sha384", 00:31:35.038 "sha512" 00:31:35.038 ], 00:31:35.038 "disable_auto_failback": false, 00:31:35.038 "fast_io_fail_timeout_sec": 0, 00:31:35.038 "generate_uuids": false, 00:31:35.038 "high_priority_weight": 0, 00:31:35.038 "io_path_stat": false, 00:31:35.038 "io_queue_requests": 512, 00:31:35.038 "keep_alive_timeout_ms": 10000, 00:31:35.038 "low_priority_weight": 0, 00:31:35.038 "medium_priority_weight": 0, 00:31:35.038 "nvme_adminq_poll_period_us": 10000, 00:31:35.038 "nvme_error_stat": false, 00:31:35.038 "nvme_ioq_poll_period_us": 0, 00:31:35.038 "rdma_cm_event_timeout_ms": 0, 00:31:35.038 "rdma_max_cq_size": 0, 00:31:35.038 "rdma_srq_size": 0, 00:31:35.038 "reconnect_delay_sec": 0, 00:31:35.038 "timeout_admin_us": 0, 00:31:35.038 "timeout_us": 0, 00:31:35.038 "transport_ack_timeout": 0, 00:31:35.038 "transport_retry_count": 4, 00:31:35.038 "transport_tos": 0 00:31:35.038 } 00:31:35.038 }, 00:31:35.038 { 00:31:35.038 "method": "bdev_nvme_attach_controller", 00:31:35.038 "params": { 00:31:35.038 "adrfam": "IPv4", 00:31:35.038 "ctrlr_loss_timeout_sec": 0, 00:31:35.038 "ddgst": false, 00:31:35.038 "fast_io_fail_timeout_sec": 0, 00:31:35.038 "hdgst": false, 00:31:35.038 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:35.038 "name": "nvme0", 00:31:35.038 "prchk_guard": false, 00:31:35.038 "prchk_reftag": false, 00:31:35.038 "psk": "key0", 00:31:35.038 "reconnect_delay_sec": 0, 00:31:35.038 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:35.038 "traddr": "127.0.0.1", 00:31:35.038 "trsvcid": "4420", 00:31:35.038 "trtype": "TCP" 00:31:35.038 } 00:31:35.038 }, 00:31:35.038 { 00:31:35.038 "method": "bdev_nvme_set_hotplug", 00:31:35.038 "params": { 00:31:35.038 "enable": false, 00:31:35.038 "period_us": 100000 00:31:35.038 } 00:31:35.038 }, 00:31:35.038 { 00:31:35.038 "method": "bdev_wait_for_examine" 00:31:35.038 } 00:31:35.038 ] 00:31:35.038 }, 00:31:35.038 { 00:31:35.038 "subsystem": "nbd", 00:31:35.039 "config": [] 00:31:35.039 } 00:31:35.039 ] 00:31:35.039 }' 00:31:35.039 13:30:31 keyring_file -- keyring/file.sh@114 -- # killprocess 118494 00:31:35.039 13:30:31 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 118494 ']' 00:31:35.039 13:30:31 keyring_file -- common/autotest_common.sh@950 -- # kill -0 118494 00:31:35.039 13:30:31 keyring_file -- common/autotest_common.sh@951 -- # uname 00:31:35.039 13:30:31 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:35.039 
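The config JSON printed by save_config above is not just logged: after the first bdevperf instance is killed, the next phase below boots a second one and feeds the snapshot back through a file descriptor, so the keyring_file keys and the nvme0 controller are recreated without replaying individual RPCs. A sketch of that shell pattern, using the same bdevperf flags that appear in the trace (the "-c /dev/fd/63" seen below most likely comes from a process substitution like this):

config=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config)
# ...old bdevperf instance killed here; replay the snapshot into a fresh one:
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
    -r /var/tmp/bperf.sock -z -c <(echo "$config")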
13:30:31 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 118494 00:31:35.039 13:30:31 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:31:35.039 13:30:31 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:31:35.039 killing process with pid 118494 00:31:35.039 13:30:31 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 118494' 00:31:35.039 13:30:31 keyring_file -- common/autotest_common.sh@965 -- # kill 118494 00:31:35.039 Received shutdown signal, test time was about 1.000000 seconds 00:31:35.039 00:31:35.039 Latency(us) 00:31:35.039 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:35.039 =================================================================================================================== 00:31:35.039 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:35.039 13:30:31 keyring_file -- common/autotest_common.sh@970 -- # wait 118494 00:31:35.297 13:30:31 keyring_file -- keyring/file.sh@117 -- # bperfpid=118967 00:31:35.297 13:30:31 keyring_file -- keyring/file.sh@119 -- # waitforlisten 118967 /var/tmp/bperf.sock 00:31:35.297 13:30:31 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 118967 ']' 00:31:35.297 13:30:31 keyring_file -- keyring/file.sh@115 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:31:35.297 13:30:31 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:35.297 13:30:31 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:35.297 13:30:31 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:31:35.297 "subsystems": [ 00:31:35.297 { 00:31:35.297 "subsystem": "keyring", 00:31:35.297 "config": [ 00:31:35.297 { 00:31:35.297 "method": "keyring_file_add_key", 00:31:35.297 "params": { 00:31:35.297 "name": "key0", 00:31:35.297 "path": "/tmp/tmp.4O0X22PX7U" 00:31:35.297 } 00:31:35.297 }, 00:31:35.297 { 00:31:35.297 "method": "keyring_file_add_key", 00:31:35.297 "params": { 00:31:35.297 "name": "key1", 00:31:35.297 "path": "/tmp/tmp.xGLpxrPEMk" 00:31:35.297 } 00:31:35.297 } 00:31:35.297 ] 00:31:35.297 }, 00:31:35.297 { 00:31:35.297 "subsystem": "iobuf", 00:31:35.297 "config": [ 00:31:35.297 { 00:31:35.297 "method": "iobuf_set_options", 00:31:35.297 "params": { 00:31:35.297 "large_bufsize": 135168, 00:31:35.297 "large_pool_count": 1024, 00:31:35.297 "small_bufsize": 8192, 00:31:35.297 "small_pool_count": 8192 00:31:35.297 } 00:31:35.297 } 00:31:35.297 ] 00:31:35.297 }, 00:31:35.297 { 00:31:35.297 "subsystem": "sock", 00:31:35.297 "config": [ 00:31:35.297 { 00:31:35.297 "method": "sock_set_default_impl", 00:31:35.297 "params": { 00:31:35.297 "impl_name": "posix" 00:31:35.298 } 00:31:35.298 }, 00:31:35.298 { 00:31:35.298 "method": "sock_impl_set_options", 00:31:35.298 "params": { 00:31:35.298 "enable_ktls": false, 00:31:35.298 "enable_placement_id": 0, 00:31:35.298 "enable_quickack": false, 00:31:35.298 "enable_recv_pipe": true, 00:31:35.298 "enable_zerocopy_send_client": false, 00:31:35.298 "enable_zerocopy_send_server": true, 00:31:35.298 "impl_name": "ssl", 00:31:35.298 "recv_buf_size": 4096, 00:31:35.298 "send_buf_size": 4096, 00:31:35.298 "tls_version": 0, 00:31:35.298 "zerocopy_threshold": 0 00:31:35.298 } 00:31:35.298 }, 00:31:35.298 { 00:31:35.298 "method": "sock_impl_set_options", 00:31:35.298 "params": { 00:31:35.298 "enable_ktls": false, 00:31:35.298 
"enable_placement_id": 0, 00:31:35.298 "enable_quickack": false, 00:31:35.298 "enable_recv_pipe": true, 00:31:35.298 "enable_zerocopy_send_client": false, 00:31:35.298 "enable_zerocopy_send_server": true, 00:31:35.298 "impl_name": "posix", 00:31:35.298 "recv_buf_size": 2097152, 00:31:35.298 "send_buf_size": 2097152, 00:31:35.298 "tls_version": 0, 00:31:35.298 "zerocopy_threshold": 0 00:31:35.298 } 00:31:35.298 } 00:31:35.298 ] 00:31:35.298 }, 00:31:35.298 { 00:31:35.298 "subsystem": "vmd", 00:31:35.298 "config": [] 00:31:35.298 }, 00:31:35.298 { 00:31:35.298 "subsystem": "accel", 00:31:35.298 "config": [ 00:31:35.298 { 00:31:35.298 "method": "accel_set_options", 00:31:35.298 "params": { 00:31:35.298 "buf_count": 2048, 00:31:35.298 "large_cache_size": 16, 00:31:35.298 "sequence_count": 2048, 00:31:35.298 "small_cache_size": 128, 00:31:35.298 "task_count": 2048 00:31:35.298 } 00:31:35.298 } 00:31:35.298 ] 00:31:35.298 }, 00:31:35.298 { 00:31:35.298 "subsystem": "bdev", 00:31:35.298 "config": [ 00:31:35.298 { 00:31:35.298 "method": "bdev_set_options", 00:31:35.298 "params": { 00:31:35.298 "bdev_auto_examine": true, 00:31:35.298 "bdev_io_cache_size": 256, 00:31:35.298 "bdev_io_pool_size": 65535, 00:31:35.298 "iobuf_large_cache_size": 16, 00:31:35.298 "iobuf_small_cache_size": 128 00:31:35.298 } 00:31:35.298 }, 00:31:35.298 { 00:31:35.298 "method": "bdev_raid_set_options", 00:31:35.298 "params": { 00:31:35.298 "process_window_size_kb": 1024 00:31:35.298 } 00:31:35.298 }, 00:31:35.298 { 00:31:35.298 "method": "bdev_iscsi_set_options", 00:31:35.298 "params": { 00:31:35.298 "timeout_sec": 30 00:31:35.298 } 00:31:35.298 }, 00:31:35.298 { 00:31:35.298 "method": "bdev_nvme_set_options", 00:31:35.298 "params": { 00:31:35.298 "action_on_timeout": "none", 00:31:35.298 "allow_accel_sequence": false, 00:31:35.298 "arbitration_burst": 0, 00:31:35.298 "bdev_retry_count": 3, 00:31:35.298 "ctrlr_loss_timeout_sec": 0, 00:31:35.298 "delay_cmd_submit": true, 00:31:35.298 "dhchap_dhgroups": [ 00:31:35.298 "null", 00:31:35.298 "ffdhe2048", 00:31:35.298 "ffdhe3072", 00:31:35.298 "ffdhe4096", 00:31:35.298 "ffdhe6144", 00:31:35.298 "ffdhe8192" 00:31:35.298 ], 00:31:35.298 "dhchap_digests": [ 00:31:35.298 "sha256", 00:31:35.298 "sha384", 00:31:35.298 "sha512" 00:31:35.298 ], 00:31:35.298 "disable_auto_failback": false, 00:31:35.298 "fast_io_fail_timeout_sec": 0, 00:31:35.298 "generate_uuids": false, 00:31:35.298 "high_priority_weight": 0, 00:31:35.298 "io_path_stat": false, 00:31:35.298 "io_queue_requests": 512, 00:31:35.298 "keep_alive_timeout_ms": 10000, 00:31:35.298 "low_priority_weight": 0, 00:31:35.298 "medium_priority_weight": 0, 00:31:35.298 "nvme_adminq_poll_period_us": 10000, 00:31:35.298 "nvme_error_stat": false, 00:31:35.298 "nvme_ioq_poll_period_us": 0, 00:31:35.298 "rdma_cm_event_timeout_ms": 0, 00:31:35.298 "rdma_max_cq_size": 0, 00:31:35.298 "rdma_srq_size": 0, 00:31:35.298 "reconnect_delay_sec": 0, 00:31:35.298 "timeout_admin_us": 0, 00:31:35.298 "timeout_us": 0, 00:31:35.298 "transport_ack_timeout": 0, 00:31:35.298 "transport_retry_count": 4, 00:31:35.298 "transport_tos": 0 00:31:35.298 } 00:31:35.298 }, 00:31:35.298 { 00:31:35.298 "method": "bdev_nvme_attach_controller", 00:31:35.298 "params": { 00:31:35.298 "adrfam": "IPv4", 00:31:35.298 "ctrlr_loss_timeout_sec": 0, 00:31:35.298 "ddgst": false, 00:31:35.298 "fast_io_fail_timeout_sec": 0, 00:31:35.298 "hdgst": false, 00:31:35.298 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:35.298 "name": "nvme0", 00:31:35.298 "prchk_guard": false, 00:31:35.298 
"prchk_reftag": false, 00:31:35.298 "psk": "key0", 00:31:35.298 "reconnect_delay_sec": 0, 00:31:35.298 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:35.298 "traddr": "127.0.0.1", 00:31:35.298 "trsvcid": "4420", 00:31:35.298 "trtype": "TCP" 00:31:35.298 } 00:31:35.299 }, 00:31:35.299 { 00:31:35.299 "method": "bdev_nvme_set_hotplug", 00:31:35.299 "params": { 00:31:35.299 "enable": false, 00:31:35.299 "period_us": 100000 00:31:35.299 } 00:31:35.299 }, 00:31:35.299 { 00:31:35.299 "method": "bdev_wait_for_examine" 00:31:35.299 } 00:31:35.299 ] 00:31:35.299 }, 00:31:35.299 { 00:31:35.299 "subsystem": "nbd", 00:31:35.299 "config": [] 00:31:35.299 } 00:31:35.299 ] 00:31:35.299 }' 00:31:35.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:35.299 13:30:31 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:35.299 13:30:31 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:35.299 13:30:31 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:35.299 [2024-07-15 13:30:31.966872] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:31:35.299 [2024-07-15 13:30:31.966958] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118967 ] 00:31:35.556 [2024-07-15 13:30:32.100172] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:35.556 [2024-07-15 13:30:32.171355] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:35.815 [2024-07-15 13:30:32.351395] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:36.381 13:30:32 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:36.381 13:30:32 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:31:36.381 13:30:32 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:31:36.381 13:30:32 keyring_file -- keyring/file.sh@120 -- # jq length 00:31:36.381 13:30:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:36.640 13:30:33 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:31:36.640 13:30:33 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:31:36.640 13:30:33 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:36.640 13:30:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:36.640 13:30:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:36.640 13:30:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:36.640 13:30:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:36.899 13:30:33 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:31:36.899 13:30:33 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:31:36.899 13:30:33 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:31:36.899 13:30:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:36.899 13:30:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:36.899 13:30:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:36.899 13:30:33 keyring_file -- 
keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:37.157 13:30:33 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:31:37.157 13:30:33 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:31:37.157 13:30:33 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:31:37.157 13:30:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:31:37.414 13:30:34 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:31:37.414 13:30:34 keyring_file -- keyring/file.sh@1 -- # cleanup 00:31:37.415 13:30:34 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.4O0X22PX7U /tmp/tmp.xGLpxrPEMk 00:31:37.415 13:30:34 keyring_file -- keyring/file.sh@20 -- # killprocess 118967 00:31:37.415 13:30:34 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 118967 ']' 00:31:37.415 13:30:34 keyring_file -- common/autotest_common.sh@950 -- # kill -0 118967 00:31:37.415 13:30:34 keyring_file -- common/autotest_common.sh@951 -- # uname 00:31:37.415 13:30:34 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:37.415 13:30:34 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 118967 00:31:37.415 killing process with pid 118967 00:31:37.415 Received shutdown signal, test time was about 1.000000 seconds 00:31:37.415 00:31:37.415 Latency(us) 00:31:37.415 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:37.415 =================================================================================================================== 00:31:37.415 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:31:37.415 13:30:34 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:31:37.415 13:30:34 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:31:37.415 13:30:34 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 118967' 00:31:37.415 13:30:34 keyring_file -- common/autotest_common.sh@965 -- # kill 118967 00:31:37.415 13:30:34 keyring_file -- common/autotest_common.sh@970 -- # wait 118967 00:31:37.672 13:30:34 keyring_file -- keyring/file.sh@21 -- # killprocess 118459 00:31:37.672 13:30:34 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 118459 ']' 00:31:37.672 13:30:34 keyring_file -- common/autotest_common.sh@950 -- # kill -0 118459 00:31:37.672 13:30:34 keyring_file -- common/autotest_common.sh@951 -- # uname 00:31:37.672 13:30:34 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:37.672 13:30:34 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 118459 00:31:37.672 killing process with pid 118459 00:31:37.672 13:30:34 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:31:37.672 13:30:34 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:31:37.672 13:30:34 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 118459' 00:31:37.672 13:30:34 keyring_file -- common/autotest_common.sh@965 -- # kill 118459 00:31:37.672 [2024-07-15 13:30:34.244156] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:31:37.672 13:30:34 keyring_file -- common/autotest_common.sh@970 -- # wait 118459 00:31:37.930 00:31:37.930 real 0m15.163s 00:31:37.930 user 0m37.427s 
00:31:37.930 sys 0m3.270s 00:31:37.930 13:30:34 keyring_file -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:37.930 13:30:34 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:37.930 ************************************ 00:31:37.930 END TEST keyring_file 00:31:37.930 ************************************ 00:31:37.930 13:30:34 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:31:37.930 13:30:34 -- spdk/autotest.sh@297 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:31:37.930 13:30:34 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:31:37.930 13:30:34 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:37.930 13:30:34 -- common/autotest_common.sh@10 -- # set +x 00:31:37.930 ************************************ 00:31:37.930 START TEST keyring_linux 00:31:37.930 ************************************ 00:31:37.930 13:30:34 keyring_linux -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:31:38.188 * Looking for test storage... 00:31:38.188 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:31:38.188 13:30:34 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:31:38.188 13:30:34 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:31:38.188 13:30:34 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:31:38.188 13:30:34 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:38.188 13:30:34 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:38.188 13:30:34 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:38.188 13:30:34 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:38.188 13:30:34 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:38.188 13:30:34 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:38.188 13:30:34 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:38.188 13:30:34 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:38.188 13:30:34 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:38.188 13:30:34 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:38.188 13:30:34 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c8b8b44b-387e-43b9-a950-dc0d98528a02 00:31:38.188 13:30:34 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=c8b8b44b-387e-43b9-a950-dc0d98528a02 00:31:38.188 13:30:34 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:38.188 13:30:34 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:38.188 13:30:34 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:31:38.188 13:30:34 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:38.188 13:30:34 keyring_linux -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:38.188 13:30:34 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:38.188 13:30:34 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:38.188 13:30:34 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:38.188 13:30:34 keyring_linux -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:38.188 13:30:34 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:38.189 13:30:34 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:38.189 13:30:34 keyring_linux -- paths/export.sh@5 -- # export PATH 00:31:38.189 13:30:34 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:38.189 13:30:34 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:31:38.189 13:30:34 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:38.189 13:30:34 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:38.189 13:30:34 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:38.189 13:30:34 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:38.189 13:30:34 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:38.189 13:30:34 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:38.189 13:30:34 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:38.189 13:30:34 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:38.189 13:30:34 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:31:38.189 13:30:34 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:31:38.189 13:30:34 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:31:38.189 13:30:34 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:31:38.189 13:30:34 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:31:38.189 13:30:34 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:31:38.189 13:30:34 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:31:38.189 13:30:34 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:31:38.189 13:30:34 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:31:38.189 13:30:34 keyring_linux -- keyring/common.sh@17 -- # 
key=00112233445566778899aabbccddeeff 00:31:38.189 13:30:34 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:31:38.189 13:30:34 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:31:38.189 13:30:34 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:31:38.189 13:30:34 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:31:38.189 13:30:34 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:31:38.189 13:30:34 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:31:38.189 13:30:34 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:31:38.189 13:30:34 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:31:38.189 13:30:34 keyring_linux -- nvmf/common.sh@705 -- # python - 00:31:38.189 13:30:34 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:31:38.189 /tmp/:spdk-test:key0 00:31:38.189 13:30:34 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:31:38.189 13:30:34 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:31:38.189 13:30:34 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:31:38.189 13:30:34 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:31:38.189 13:30:34 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:31:38.189 13:30:34 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:31:38.189 13:30:34 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:31:38.189 13:30:34 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:31:38.189 13:30:34 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:31:38.189 13:30:34 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:31:38.189 13:30:34 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:31:38.189 13:30:34 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:31:38.189 13:30:34 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:31:38.189 13:30:34 keyring_linux -- nvmf/common.sh@705 -- # python - 00:31:38.189 13:30:34 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:31:38.189 /tmp/:spdk-test:key1 00:31:38.189 13:30:34 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:31:38.189 13:30:34 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=119122 00:31:38.189 13:30:34 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:38.189 13:30:34 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 119122 00:31:38.189 13:30:34 keyring_linux -- common/autotest_common.sh@827 -- # '[' -z 119122 ']' 00:31:38.189 13:30:34 keyring_linux -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:38.189 13:30:34 keyring_linux -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:38.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:38.189 13:30:34 keyring_linux -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
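Note on the key preparation traced above: prep_key converts the raw hex secret 00112233445566778899aabbccddeeff (digest 0) into an NVMe/TCP TLS interchange string via the embedded Python helper, writes it to /tmp/:spdk-test:key0, and restricts the file to mode 0600 (key1 is handled identically). A rough by-hand equivalent is sketched below; the interchange string is copied verbatim from where this run prints it a few steps later, and everything else is illustrative rather than the exact helper implementation.

    # Sketch only: a by-hand equivalent of what prep_key did for key0 in this run.
    # The NVMeTLSkey-1 string is the one this log prints later; normally SPDK's
    # format_interchange_psk helper derives it from the raw hex key and digest.
    psk='NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:'
    printf '%s\n' "$psk" > /tmp/:spdk-test:key0
    chmod 0600 /tmp/:spdk-test:key0
    # A few steps later the script loads the same string into the kernel session
    # keyring; that ":spdk-test:key0" keyring name is what --psk refers to when
    # bdevperf attaches the controller.
    keyctl add user :spdk-test:key0 "$psk" @s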
00:31:38.189 13:30:34 keyring_linux -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:38.189 13:30:34 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:31:38.189 [2024-07-15 13:30:34.913724] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:31:38.189 [2024-07-15 13:30:34.913860] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119122 ] 00:31:38.447 [2024-07-15 13:30:35.051647] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:38.447 [2024-07-15 13:30:35.130937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:39.395 13:30:35 keyring_linux -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:39.395 13:30:35 keyring_linux -- common/autotest_common.sh@860 -- # return 0 00:31:39.395 13:30:35 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:31:39.395 13:30:35 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:39.395 13:30:35 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:31:39.395 [2024-07-15 13:30:35.909344] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:39.395 null0 00:31:39.395 [2024-07-15 13:30:35.941300] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:31:39.395 [2024-07-15 13:30:35.941547] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:31:39.395 13:30:35 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:39.395 13:30:35 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:31:39.395 719287424 00:31:39.395 13:30:35 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:31:39.395 810275772 00:31:39.395 13:30:35 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=119158 00:31:39.395 13:30:35 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 119158 /var/tmp/bperf.sock 00:31:39.395 13:30:35 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:31:39.395 13:30:35 keyring_linux -- common/autotest_common.sh@827 -- # '[' -z 119158 ']' 00:31:39.395 13:30:35 keyring_linux -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:39.396 13:30:35 keyring_linux -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:39.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:39.396 13:30:35 keyring_linux -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:39.396 13:30:35 keyring_linux -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:39.396 13:30:35 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:31:39.396 [2024-07-15 13:30:36.024351] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:31:39.396 [2024-07-15 13:30:36.024453] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119158 ] 00:31:39.653 [2024-07-15 13:30:36.165584] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:39.653 [2024-07-15 13:30:36.243621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:40.219 13:30:36 keyring_linux -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:40.219 13:30:36 keyring_linux -- common/autotest_common.sh@860 -- # return 0 00:31:40.219 13:30:36 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:31:40.219 13:30:36 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:31:40.477 13:30:37 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:31:40.477 13:30:37 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:31:40.737 13:30:37 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:31:40.737 13:30:37 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:31:40.995 [2024-07-15 13:30:37.643986] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:40.995 nvme0n1 00:31:40.995 13:30:37 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:31:40.995 13:30:37 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:31:40.995 13:30:37 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:31:41.252 13:30:37 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:31:41.252 13:30:37 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:31:41.252 13:30:37 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:41.510 13:30:37 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:31:41.510 13:30:37 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:31:41.510 13:30:38 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:31:41.510 13:30:38 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:31:41.510 13:30:38 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:41.510 13:30:38 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:41.510 13:30:38 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:31:41.769 13:30:38 keyring_linux -- keyring/linux.sh@25 -- # sn=719287424 00:31:41.769 13:30:38 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:31:41.769 13:30:38 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:31:41.769 13:30:38 keyring_linux -- keyring/linux.sh@26 -- # [[ 719287424 == \7\1\9\2\8\7\4\2\4 ]] 00:31:41.769 13:30:38 keyring_linux -- keyring/linux.sh@27 -- 
# keyctl print 719287424 00:31:41.769 13:30:38 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:31:41.769 13:30:38 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:41.769 Running I/O for 1 seconds... 00:31:42.703 00:31:42.703 Latency(us) 00:31:42.703 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:42.703 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:31:42.703 nvme0n1 : 1.01 12012.15 46.92 0.00 0.00 10593.88 8340.95 18230.92 00:31:42.703 =================================================================================================================== 00:31:42.703 Total : 12012.15 46.92 0.00 0.00 10593.88 8340.95 18230.92 00:31:42.703 0 00:31:42.703 13:30:39 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:31:42.703 13:30:39 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:31:43.268 13:30:39 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:31:43.268 13:30:39 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:31:43.268 13:30:39 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:31:43.268 13:30:39 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:31:43.268 13:30:39 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:43.268 13:30:39 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:31:43.268 13:30:40 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:31:43.268 13:30:40 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:31:43.268 13:30:40 keyring_linux -- keyring/linux.sh@23 -- # return 00:31:43.268 13:30:40 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:31:43.268 13:30:40 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:31:43.268 13:30:40 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:31:43.268 13:30:40 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:31:43.268 13:30:40 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:43.268 13:30:40 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:31:43.525 13:30:40 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:43.525 13:30:40 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:31:43.525 13:30:40 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 
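This last attach is the negative-path check in linux.sh: nvme0 has just been detached and check_keys 0 confirmed the bperf keyring is empty again, so reconnecting with the second key, :spdk-test:key1, is expected to fail; the failure shows up below as the TCP connection being dropped ("Transport endpoint is not connected") and an Input/output error from the RPC. The NOT wrapper from autotest_common.sh passes only when the wrapped command exits non-zero (the in-tree helper additionally normalizes exit codes above 128 from signals, which is what the es checks in the trace below are doing); conceptually it boils down to the sketch here.

    # Conceptual sketch of the NOT helper used above; the real in-tree version
    # also validates the argument and handles signal exit codes.
    NOT() {
        if "$@"; then
            return 1    # wrapped command unexpectedly succeeded
        fi
        return 0        # wrapped command failed, which is what the test expects
    }
    # Paths assume the spdk repo root as working directory.
    NOT scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
        -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
        --psk :spdk-test:key1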
00:31:43.525 [2024-07-15 13:30:40.204859] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:31:43.525 [2024-07-15 13:30:40.205457] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128f180 (107): Transport endpoint is not connected 00:31:43.525 [2024-07-15 13:30:40.206440] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128f180 (9): Bad file descriptor 00:31:43.525 [2024-07-15 13:30:40.207437] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:43.525 [2024-07-15 13:30:40.207467] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:31:43.525 [2024-07-15 13:30:40.207478] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:43.525 2024/07/15 13:30:40 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 psk::spdk-test:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:31:43.525 request: 00:31:43.525 { 00:31:43.525 "method": "bdev_nvme_attach_controller", 00:31:43.525 "params": { 00:31:43.525 "name": "nvme0", 00:31:43.525 "trtype": "tcp", 00:31:43.525 "traddr": "127.0.0.1", 00:31:43.525 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:43.525 "adrfam": "ipv4", 00:31:43.525 "trsvcid": "4420", 00:31:43.525 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:43.525 "psk": ":spdk-test:key1" 00:31:43.525 } 00:31:43.525 } 00:31:43.525 Got JSON-RPC error response 00:31:43.525 GoRPCClient: error on JSON-RPC call 00:31:43.525 13:30:40 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:31:43.525 13:30:40 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:43.525 13:30:40 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:43.525 13:30:40 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:43.525 13:30:40 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:31:43.525 13:30:40 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:31:43.525 13:30:40 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:31:43.525 13:30:40 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:31:43.525 13:30:40 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:31:43.525 13:30:40 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:31:43.525 13:30:40 keyring_linux -- keyring/linux.sh@33 -- # sn=719287424 00:31:43.525 13:30:40 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 719287424 00:31:43.525 1 links removed 00:31:43.525 13:30:40 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:31:43.525 13:30:40 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:31:43.525 13:30:40 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:31:43.525 13:30:40 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:31:43.525 13:30:40 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:31:43.525 13:30:40 keyring_linux -- keyring/linux.sh@33 -- # sn=810275772 00:31:43.525 13:30:40 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 810275772 00:31:43.525 1 links removed 00:31:43.525 13:30:40 
keyring_linux -- keyring/linux.sh@41 -- # killprocess 119158 00:31:43.525 13:30:40 keyring_linux -- common/autotest_common.sh@946 -- # '[' -z 119158 ']' 00:31:43.525 13:30:40 keyring_linux -- common/autotest_common.sh@950 -- # kill -0 119158 00:31:43.525 13:30:40 keyring_linux -- common/autotest_common.sh@951 -- # uname 00:31:43.525 13:30:40 keyring_linux -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:43.525 13:30:40 keyring_linux -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 119158 00:31:43.525 13:30:40 keyring_linux -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:31:43.525 13:30:40 keyring_linux -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:31:43.525 killing process with pid 119158 00:31:43.525 13:30:40 keyring_linux -- common/autotest_common.sh@964 -- # echo 'killing process with pid 119158' 00:31:43.782 13:30:40 keyring_linux -- common/autotest_common.sh@965 -- # kill 119158 00:31:43.782 Received shutdown signal, test time was about 1.000000 seconds 00:31:43.782 00:31:43.782 Latency(us) 00:31:43.782 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:43.782 =================================================================================================================== 00:31:43.782 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:43.782 13:30:40 keyring_linux -- common/autotest_common.sh@970 -- # wait 119158 00:31:43.782 13:30:40 keyring_linux -- keyring/linux.sh@42 -- # killprocess 119122 00:31:43.782 13:30:40 keyring_linux -- common/autotest_common.sh@946 -- # '[' -z 119122 ']' 00:31:43.782 13:30:40 keyring_linux -- common/autotest_common.sh@950 -- # kill -0 119122 00:31:43.782 13:30:40 keyring_linux -- common/autotest_common.sh@951 -- # uname 00:31:43.782 13:30:40 keyring_linux -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:43.782 13:30:40 keyring_linux -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 119122 00:31:43.782 13:30:40 keyring_linux -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:31:43.782 13:30:40 keyring_linux -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:31:43.782 killing process with pid 119122 00:31:43.782 13:30:40 keyring_linux -- common/autotest_common.sh@964 -- # echo 'killing process with pid 119122' 00:31:43.782 13:30:40 keyring_linux -- common/autotest_common.sh@965 -- # kill 119122 00:31:43.782 13:30:40 keyring_linux -- common/autotest_common.sh@970 -- # wait 119122 00:31:44.347 00:31:44.347 real 0m6.186s 00:31:44.347 user 0m11.879s 00:31:44.347 sys 0m1.651s 00:31:44.347 13:30:40 keyring_linux -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:44.347 13:30:40 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:31:44.347 ************************************ 00:31:44.347 END TEST keyring_linux 00:31:44.347 ************************************ 00:31:44.347 13:30:40 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:31:44.347 13:30:40 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:31:44.347 13:30:40 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:31:44.347 13:30:40 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:31:44.347 13:30:40 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:31:44.347 13:30:40 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:31:44.347 13:30:40 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:31:44.347 13:30:40 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:31:44.347 13:30:40 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:31:44.347 13:30:40 -- spdk/autotest.sh@352 
-- # '[' 0 -eq 1 ']' 00:31:44.347 13:30:40 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:31:44.347 13:30:40 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:31:44.347 13:30:40 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:31:44.347 13:30:40 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:31:44.347 13:30:40 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:31:44.347 13:30:40 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:31:44.347 13:30:40 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:31:44.347 13:30:40 -- common/autotest_common.sh@720 -- # xtrace_disable 00:31:44.347 13:30:40 -- common/autotest_common.sh@10 -- # set +x 00:31:44.347 13:30:40 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:31:44.347 13:30:40 -- common/autotest_common.sh@1388 -- # local autotest_es=0 00:31:44.347 13:30:40 -- common/autotest_common.sh@1389 -- # xtrace_disable 00:31:44.347 13:30:40 -- common/autotest_common.sh@10 -- # set +x 00:31:46.245 INFO: APP EXITING 00:31:46.245 INFO: killing all VMs 00:31:46.245 INFO: killing vhost app 00:31:46.245 INFO: EXIT DONE 00:31:46.502 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:46.502 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:31:46.502 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:31:47.437 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:47.437 Cleaning 00:31:47.437 Removing: /var/run/dpdk/spdk0/config 00:31:47.437 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:31:47.437 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:31:47.437 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:31:47.437 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:31:47.437 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:31:47.437 Removing: /var/run/dpdk/spdk0/hugepage_info 00:31:47.437 Removing: /var/run/dpdk/spdk1/config 00:31:47.437 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:31:47.437 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:31:47.437 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:31:47.437 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:31:47.437 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:31:47.437 Removing: /var/run/dpdk/spdk1/hugepage_info 00:31:47.437 Removing: /var/run/dpdk/spdk2/config 00:31:47.437 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:31:47.437 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:31:47.437 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:31:47.437 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:31:47.437 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:31:47.437 Removing: /var/run/dpdk/spdk2/hugepage_info 00:31:47.437 Removing: /var/run/dpdk/spdk3/config 00:31:47.437 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:31:47.437 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:31:47.437 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:31:47.437 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:31:47.437 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:31:47.437 Removing: /var/run/dpdk/spdk3/hugepage_info 00:31:47.437 Removing: /var/run/dpdk/spdk4/config 00:31:47.437 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:31:47.437 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:31:47.437 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:31:47.437 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 
00:31:47.437 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:31:47.437 Removing: /var/run/dpdk/spdk4/hugepage_info 00:31:47.437 Removing: /dev/shm/nvmf_trace.0 00:31:47.437 Removing: /dev/shm/spdk_tgt_trace.pid72723 00:31:47.437 Removing: /var/run/dpdk/spdk0 00:31:47.437 Removing: /var/run/dpdk/spdk1 00:31:47.437 Removing: /var/run/dpdk/spdk2 00:31:47.437 Removing: /var/run/dpdk/spdk3 00:31:47.437 Removing: /var/run/dpdk/spdk4 00:31:47.437 Removing: /var/run/dpdk/spdk_pid100090 00:31:47.437 Removing: /var/run/dpdk/spdk_pid100213 00:31:47.437 Removing: /var/run/dpdk/spdk_pid100463 00:31:47.437 Removing: /var/run/dpdk/spdk_pid100589 00:31:47.437 Removing: /var/run/dpdk/spdk_pid100724 00:31:47.437 Removing: /var/run/dpdk/spdk_pid101072 00:31:47.437 Removing: /var/run/dpdk/spdk_pid101455 00:31:47.437 Removing: /var/run/dpdk/spdk_pid101463 00:31:47.437 Removing: /var/run/dpdk/spdk_pid103675 00:31:47.437 Removing: /var/run/dpdk/spdk_pid103983 00:31:47.437 Removing: /var/run/dpdk/spdk_pid104479 00:31:47.437 Removing: /var/run/dpdk/spdk_pid104482 00:31:47.437 Removing: /var/run/dpdk/spdk_pid104822 00:31:47.437 Removing: /var/run/dpdk/spdk_pid104836 00:31:47.437 Removing: /var/run/dpdk/spdk_pid104850 00:31:47.437 Removing: /var/run/dpdk/spdk_pid104877 00:31:47.437 Removing: /var/run/dpdk/spdk_pid104887 00:31:47.437 Removing: /var/run/dpdk/spdk_pid105033 00:31:47.437 Removing: /var/run/dpdk/spdk_pid105035 00:31:47.437 Removing: /var/run/dpdk/spdk_pid105138 00:31:47.437 Removing: /var/run/dpdk/spdk_pid105140 00:31:47.437 Removing: /var/run/dpdk/spdk_pid105243 00:31:47.437 Removing: /var/run/dpdk/spdk_pid105255 00:31:47.437 Removing: /var/run/dpdk/spdk_pid105715 00:31:47.437 Removing: /var/run/dpdk/spdk_pid105764 00:31:47.437 Removing: /var/run/dpdk/spdk_pid105922 00:31:47.437 Removing: /var/run/dpdk/spdk_pid106038 00:31:47.437 Removing: /var/run/dpdk/spdk_pid106431 00:31:47.437 Removing: /var/run/dpdk/spdk_pid106658 00:31:47.437 Removing: /var/run/dpdk/spdk_pid107142 00:31:47.437 Removing: /var/run/dpdk/spdk_pid107731 00:31:47.437 Removing: /var/run/dpdk/spdk_pid109082 00:31:47.437 Removing: /var/run/dpdk/spdk_pid109674 00:31:47.437 Removing: /var/run/dpdk/spdk_pid109676 00:31:47.437 Removing: /var/run/dpdk/spdk_pid111607 00:31:47.437 Removing: /var/run/dpdk/spdk_pid111692 00:31:47.437 Removing: /var/run/dpdk/spdk_pid111781 00:31:47.437 Removing: /var/run/dpdk/spdk_pid111873 00:31:47.437 Removing: /var/run/dpdk/spdk_pid112017 00:31:47.437 Removing: /var/run/dpdk/spdk_pid112104 00:31:47.437 Removing: /var/run/dpdk/spdk_pid112189 00:31:47.437 Removing: /var/run/dpdk/spdk_pid112284 00:31:47.437 Removing: /var/run/dpdk/spdk_pid112621 00:31:47.437 Removing: /var/run/dpdk/spdk_pid113301 00:31:47.437 Removing: /var/run/dpdk/spdk_pid114631 00:31:47.437 Removing: /var/run/dpdk/spdk_pid114836 00:31:47.437 Removing: /var/run/dpdk/spdk_pid115112 00:31:47.437 Removing: /var/run/dpdk/spdk_pid115413 00:31:47.437 Removing: /var/run/dpdk/spdk_pid115967 00:31:47.437 Removing: /var/run/dpdk/spdk_pid115972 00:31:47.437 Removing: /var/run/dpdk/spdk_pid116331 00:31:47.699 Removing: /var/run/dpdk/spdk_pid116483 00:31:47.699 Removing: /var/run/dpdk/spdk_pid116636 00:31:47.699 Removing: /var/run/dpdk/spdk_pid116728 00:31:47.699 Removing: /var/run/dpdk/spdk_pid116883 00:31:47.699 Removing: /var/run/dpdk/spdk_pid116987 00:31:47.699 Removing: /var/run/dpdk/spdk_pid117658 00:31:47.699 Removing: /var/run/dpdk/spdk_pid117688 00:31:47.699 Removing: /var/run/dpdk/spdk_pid117729 00:31:47.699 Removing: /var/run/dpdk/spdk_pid117971 
00:31:47.699 Removing: /var/run/dpdk/spdk_pid118011 00:31:47.699 Removing: /var/run/dpdk/spdk_pid118042 00:31:47.699 Removing: /var/run/dpdk/spdk_pid118459 00:31:47.699 Removing: /var/run/dpdk/spdk_pid118494 00:31:47.699 Removing: /var/run/dpdk/spdk_pid118967 00:31:47.699 Removing: /var/run/dpdk/spdk_pid119122 00:31:47.699 Removing: /var/run/dpdk/spdk_pid119158 00:31:47.699 Removing: /var/run/dpdk/spdk_pid72578 00:31:47.699 Removing: /var/run/dpdk/spdk_pid72723 00:31:47.699 Removing: /var/run/dpdk/spdk_pid72984 00:31:47.699 Removing: /var/run/dpdk/spdk_pid73071 00:31:47.699 Removing: /var/run/dpdk/spdk_pid73116 00:31:47.699 Removing: /var/run/dpdk/spdk_pid73220 00:31:47.699 Removing: /var/run/dpdk/spdk_pid73250 00:31:47.699 Removing: /var/run/dpdk/spdk_pid73378 00:31:47.699 Removing: /var/run/dpdk/spdk_pid73648 00:31:47.699 Removing: /var/run/dpdk/spdk_pid73824 00:31:47.699 Removing: /var/run/dpdk/spdk_pid73895 00:31:47.699 Removing: /var/run/dpdk/spdk_pid73987 00:31:47.699 Removing: /var/run/dpdk/spdk_pid74082 00:31:47.699 Removing: /var/run/dpdk/spdk_pid74115 00:31:47.699 Removing: /var/run/dpdk/spdk_pid74151 00:31:47.699 Removing: /var/run/dpdk/spdk_pid74212 00:31:47.699 Removing: /var/run/dpdk/spdk_pid74330 00:31:47.699 Removing: /var/run/dpdk/spdk_pid74943 00:31:47.699 Removing: /var/run/dpdk/spdk_pid75007 00:31:47.699 Removing: /var/run/dpdk/spdk_pid75077 00:31:47.699 Removing: /var/run/dpdk/spdk_pid75105 00:31:47.699 Removing: /var/run/dpdk/spdk_pid75184 00:31:47.699 Removing: /var/run/dpdk/spdk_pid75212 00:31:47.699 Removing: /var/run/dpdk/spdk_pid75291 00:31:47.699 Removing: /var/run/dpdk/spdk_pid75321 00:31:47.699 Removing: /var/run/dpdk/spdk_pid75367 00:31:47.699 Removing: /var/run/dpdk/spdk_pid75402 00:31:47.699 Removing: /var/run/dpdk/spdk_pid75449 00:31:47.699 Removing: /var/run/dpdk/spdk_pid75479 00:31:47.699 Removing: /var/run/dpdk/spdk_pid75625 00:31:47.699 Removing: /var/run/dpdk/spdk_pid75661 00:31:47.699 Removing: /var/run/dpdk/spdk_pid75735 00:31:47.699 Removing: /var/run/dpdk/spdk_pid75805 00:31:47.699 Removing: /var/run/dpdk/spdk_pid75829 00:31:47.699 Removing: /var/run/dpdk/spdk_pid75892 00:31:47.699 Removing: /var/run/dpdk/spdk_pid75922 00:31:47.699 Removing: /var/run/dpdk/spdk_pid75957 00:31:47.700 Removing: /var/run/dpdk/spdk_pid75991 00:31:47.700 Removing: /var/run/dpdk/spdk_pid76026 00:31:47.700 Removing: /var/run/dpdk/spdk_pid76065 00:31:47.700 Removing: /var/run/dpdk/spdk_pid76095 00:31:47.700 Removing: /var/run/dpdk/spdk_pid76130 00:31:47.700 Removing: /var/run/dpdk/spdk_pid76164 00:31:47.700 Removing: /var/run/dpdk/spdk_pid76204 00:31:47.700 Removing: /var/run/dpdk/spdk_pid76233 00:31:47.700 Removing: /var/run/dpdk/spdk_pid76273 00:31:47.700 Removing: /var/run/dpdk/spdk_pid76302 00:31:47.700 Removing: /var/run/dpdk/spdk_pid76344 00:31:47.700 Removing: /var/run/dpdk/spdk_pid76373 00:31:47.700 Removing: /var/run/dpdk/spdk_pid76412 00:31:47.700 Removing: /var/run/dpdk/spdk_pid76442 00:31:47.700 Removing: /var/run/dpdk/spdk_pid76484 00:31:47.700 Removing: /var/run/dpdk/spdk_pid76517 00:31:47.700 Removing: /var/run/dpdk/spdk_pid76552 00:31:47.700 Removing: /var/run/dpdk/spdk_pid76587 00:31:47.700 Removing: /var/run/dpdk/spdk_pid76657 00:31:47.700 Removing: /var/run/dpdk/spdk_pid76763 00:31:47.700 Removing: /var/run/dpdk/spdk_pid77175 00:31:47.700 Removing: /var/run/dpdk/spdk_pid83893 00:31:47.700 Removing: /var/run/dpdk/spdk_pid84234 00:31:47.700 Removing: /var/run/dpdk/spdk_pid86637 00:31:47.700 Removing: /var/run/dpdk/spdk_pid87010 00:31:47.700 Removing: 
/var/run/dpdk/spdk_pid87253 00:31:47.700 Removing: /var/run/dpdk/spdk_pid87299 00:31:47.700 Removing: /var/run/dpdk/spdk_pid88191 00:31:47.700 Removing: /var/run/dpdk/spdk_pid88247 00:31:47.700 Removing: /var/run/dpdk/spdk_pid88604 00:31:47.700 Removing: /var/run/dpdk/spdk_pid89135 00:31:47.700 Removing: /var/run/dpdk/spdk_pid89581 00:31:47.700 Removing: /var/run/dpdk/spdk_pid90546 00:31:47.700 Removing: /var/run/dpdk/spdk_pid91507 00:31:47.700 Removing: /var/run/dpdk/spdk_pid91624 00:31:47.961 Removing: /var/run/dpdk/spdk_pid91693 00:31:47.961 Removing: /var/run/dpdk/spdk_pid93150 00:31:47.961 Removing: /var/run/dpdk/spdk_pid93376 00:31:47.961 Removing: /var/run/dpdk/spdk_pid98699 00:31:47.961 Removing: /var/run/dpdk/spdk_pid99119 00:31:47.961 Removing: /var/run/dpdk/spdk_pid99222 00:31:47.961 Removing: /var/run/dpdk/spdk_pid99374 00:31:47.961 Removing: /var/run/dpdk/spdk_pid99420 00:31:47.961 Removing: /var/run/dpdk/spdk_pid99465 00:31:47.961 Removing: /var/run/dpdk/spdk_pid99510 00:31:47.961 Removing: /var/run/dpdk/spdk_pid99673 00:31:47.961 Removing: /var/run/dpdk/spdk_pid99825 00:31:47.961 Clean 00:31:47.961 13:30:44 -- common/autotest_common.sh@1447 -- # return 0 00:31:47.961 13:30:44 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:31:47.961 13:30:44 -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:47.961 13:30:44 -- common/autotest_common.sh@10 -- # set +x 00:31:47.961 13:30:44 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:31:47.961 13:30:44 -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:47.961 13:30:44 -- common/autotest_common.sh@10 -- # set +x 00:31:47.961 13:30:44 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:31:47.961 13:30:44 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:31:47.961 13:30:44 -- spdk/autotest.sh@389 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:31:47.961 13:30:44 -- spdk/autotest.sh@391 -- # hash lcov 00:31:47.961 13:30:44 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:31:47.961 13:30:44 -- spdk/autotest.sh@393 -- # hostname 00:31:47.961 13:30:44 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:31:48.220 geninfo: WARNING: invalid characters removed from testname! 
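The remaining lcov calls aggregate coverage for the whole run: the freshly captured cov_test.info is merged with the pre-test baseline cov_base.info into cov_total.info, and then DPDK sources, system headers under /usr, and a few helper apps (examples/vmd, spdk_lspci, spdk_top) are stripped out so the totals only reflect SPDK code exercised by the tests. The job stops at the filtered tracefile; purely as an illustration of a possible follow-up, a browsable report could be generated from it with genhtml as sketched below.

    # Illustrative follow-up only -- not executed by this job.
    genhtml --branch-coverage \
        /home/vagrant/spdk_repo/spdk/../output/cov_total.info \
        -o /home/vagrant/spdk_repo/spdk/../output/coverage-html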
00:32:10.144 13:31:06 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:13.447 13:31:09 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:15.342 13:31:11 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:17.872 13:31:14 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:20.449 13:31:16 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:22.993 13:31:19 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:24.893 13:31:21 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:32:24.893 13:31:21 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:24.893 13:31:21 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:32:24.893 13:31:21 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:24.893 13:31:21 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:24.893 13:31:21 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:24.893 13:31:21 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:24.893 13:31:21 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:24.893 13:31:21 -- paths/export.sh@5 -- $ export PATH 00:32:24.893 13:31:21 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:24.893 13:31:21 -- common/autobuild_common.sh@436 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:32:24.893 13:31:21 -- common/autobuild_common.sh@437 -- $ date +%s 00:32:25.152 13:31:21 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1721050281.XXXXXX 00:32:25.152 13:31:21 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1721050281.qlCd8C 00:32:25.152 13:31:21 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:32:25.152 13:31:21 -- common/autobuild_common.sh@443 -- $ '[' -n v22.11.4 ']' 00:32:25.152 13:31:21 -- common/autobuild_common.sh@444 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:32:25.152 13:31:21 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:32:25.152 13:31:21 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:32:25.152 13:31:21 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:32:25.152 13:31:21 -- common/autobuild_common.sh@453 -- $ get_config_params 00:32:25.152 13:31:21 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:32:25.152 13:31:21 -- common/autotest_common.sh@10 -- $ set +x 00:32:25.152 13:31:21 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang' 00:32:25.152 13:31:21 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:32:25.152 13:31:21 -- pm/common@17 -- $ local monitor 00:32:25.152 13:31:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:25.152 13:31:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:25.152 13:31:21 -- pm/common@25 -- $ sleep 1 00:32:25.152 13:31:21 -- pm/common@21 -- $ date +%s 00:32:25.152 13:31:21 -- pm/common@21 -- $ date +%s 00:32:25.152 13:31:21 -- pm/common@21 -- $ 
00:32:25.152 13:31:21 -- common/autobuild_common.sh@455 -- $ start_monitor_resources
00:32:25.152 13:31:21 -- pm/common@17 -- $ local monitor
00:32:25.152 13:31:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:32:25.152 13:31:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:32:25.152 13:31:21 -- pm/common@25 -- $ sleep 1
00:32:25.152 13:31:21 -- pm/common@21 -- $ date +%s
00:32:25.152 13:31:21 -- pm/common@21 -- $ date +%s
00:32:25.152 13:31:21 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721050281
00:32:25.152 13:31:21 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721050281
00:32:25.152 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721050281_collect-vmstat.pm.log
00:32:25.152 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721050281_collect-cpu-load.pm.log
00:32:26.086 13:31:22 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT
00:32:26.086 13:31:22 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10
00:32:26.086 13:31:22 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk
00:32:26.086 13:31:22 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:32:26.086 13:31:22 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:32:26.086 13:31:22 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:32:26.086 13:31:22 -- spdk/autopackage.sh@19 -- $ timing_finish
00:32:26.086 13:31:22 -- common/autotest_common.sh@732 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:32:26.086 13:31:22 -- common/autotest_common.sh@733 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:32:26.086 13:31:22 -- common/autotest_common.sh@735 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:32:26.086 13:31:22 -- spdk/autopackage.sh@20 -- $ exit 0
00:32:26.086 13:31:22 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:32:26.086 13:31:22 -- pm/common@29 -- $ signal_monitor_resources TERM
00:32:26.086 13:31:22 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:32:26.086 13:31:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:32:26.086 13:31:22 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:32:26.086 13:31:22 -- pm/common@44 -- $ pid=120890
00:32:26.086 13:31:22 -- pm/common@50 -- $ kill -TERM 120890
00:32:26.086 13:31:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:32:26.086 13:31:22 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:32:26.086 13:31:22 -- pm/common@44 -- $ pid=120892
00:32:26.086 13:31:22 -- pm/common@50 -- $ kill -TERM 120892
00:32:26.086 + [[ -n 6011 ]]
00:32:26.086 + sudo kill 6011
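The pm/common trace above follows the usual pid-file pattern for background resource monitors: each collector is started in the background with its PID recorded next to the power logs, a trap runs stop_monitor_resources on EXIT, and the handler TERMs whichever PIDs still have a pid file. A simplified sketch of that pattern with placeholder names ($PM_DIR, $POWER_DIR, and stop_monitors are illustrative, not the exact identifiers from the scripts):

    start_monitors() {
        for m in collect-cpu-load collect-vmstat; do
            "$PM_DIR/$m" -d "$POWER_DIR" -l -p "monitor.$$" &   # collector runs in the background
            echo $! > "$POWER_DIR/$m.pid"                       # remember its PID for teardown
        done
    }

    stop_monitors() {
        for m in collect-cpu-load collect-vmstat; do
            [[ -e "$POWER_DIR/$m.pid" ]] || continue
            kill -TERM "$(cat "$POWER_DIR/$m.pid")"             # mirrors 'kill -TERM <pid>' in the trace
        done
    }

    trap stop_monitors EXIT    # counterpart of 'trap stop_monitor_resources EXIT' above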
00:32:26.097 [Pipeline] }
00:32:26.117 [Pipeline] // timeout
00:32:26.123 [Pipeline] }
00:32:26.145 [Pipeline] // stage
00:32:26.150 [Pipeline] }
00:32:26.169 [Pipeline] // catchError
00:32:26.181 [Pipeline] stage
00:32:26.183 [Pipeline] { (Stop VM)
00:32:26.199 [Pipeline] sh
00:32:26.482 + vagrant halt
00:32:30.666 ==> default: Halting domain...
00:32:37.245 [Pipeline] sh
00:32:37.521 + vagrant destroy -f
00:32:40.803 ==> default: Removing domain...
00:32:40.816 [Pipeline] sh
00:32:41.094 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest/output
00:32:41.103 [Pipeline] }
00:32:41.124 [Pipeline] // stage
00:32:41.130 [Pipeline] }
00:32:41.149 [Pipeline] // dir
00:32:41.154 [Pipeline] }
00:32:41.171 [Pipeline] // wrap
00:32:41.177 [Pipeline] }
00:32:41.194 [Pipeline] // catchError
00:32:41.203 [Pipeline] stage
00:32:41.205 [Pipeline] { (Epilogue)
00:32:41.220 [Pipeline] sh
00:32:41.506 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:32:46.800 [Pipeline] catchError
00:32:46.802 [Pipeline] {
00:32:46.814 [Pipeline] sh
00:32:47.093 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:32:47.093 Artifacts sizes are good
00:32:47.102 [Pipeline] }
00:32:47.118 [Pipeline] // catchError
00:32:47.128 [Pipeline] archiveArtifacts
00:32:47.135 Archiving artifacts
00:32:47.299 [Pipeline] cleanWs
00:32:47.310 [WS-CLEANUP] Deleting project workspace...
00:32:47.310 [WS-CLEANUP] Deferred wipeout is used...
00:32:47.316 [WS-CLEANUP] done
00:32:47.317 [Pipeline] }
00:32:47.335 [Pipeline] // stage
00:32:47.339 [Pipeline] }
00:32:47.353 [Pipeline] // node
00:32:47.359 [Pipeline] End of Pipeline
00:32:47.391 Finished: SUCCESS